00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3690 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3291 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.093 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/iscsi-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.094 The recommended git tool is: git 00:00:00.094 using credential 00000000-0000-0000-0000-000000000002 00:00:00.095 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/iscsi-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.130 Fetching changes from the remote Git repository 00:00:00.132 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.164 Using shallow fetch with depth 1 00:00:00.164 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.164 > git --version # timeout=10 00:00:00.188 > git --version # 'git version 2.39.2' 00:00:00.188 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.204 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.204 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.363 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.373 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.382 Checking out Revision 456d80899d5187c68de113852b37bde1201fd33a (FETCH_HEAD) 00:00:06.382 > git config core.sparsecheckout # timeout=10 00:00:06.392 > git read-tree -mu HEAD # timeout=10 00:00:06.407 > git checkout -f 
456d80899d5187c68de113852b37bde1201fd33a # timeout=5 00:00:06.430 Commit message: "jenkins/config: Drop WFP25 for maintenance" 00:00:06.430 > git rev-list --no-walk 456d80899d5187c68de113852b37bde1201fd33a # timeout=10 00:00:06.508 [Pipeline] Start of Pipeline 00:00:06.522 [Pipeline] library 00:00:06.524 Loading library shm_lib@master 00:00:06.524 Library shm_lib@master is cached. Copying from home. 00:00:06.538 [Pipeline] node 00:00:06.546 Running on VM-host-SM4 in /var/jenkins/workspace/iscsi-uring-vg-autotest 00:00:06.547 [Pipeline] { 00:00:06.556 [Pipeline] catchError 00:00:06.557 [Pipeline] { 00:00:06.569 [Pipeline] wrap 00:00:06.579 [Pipeline] { 00:00:06.585 [Pipeline] stage 00:00:06.586 [Pipeline] { (Prologue) 00:00:06.601 [Pipeline] echo 00:00:06.602 Node: VM-host-SM4 00:00:06.606 [Pipeline] cleanWs 00:00:06.614 [WS-CLEANUP] Deleting project workspace... 00:00:06.614 [WS-CLEANUP] Deferred wipeout is used... 00:00:06.620 [WS-CLEANUP] done 00:00:06.769 [Pipeline] setCustomBuildProperty 00:00:06.824 [Pipeline] httpRequest 00:00:06.837 [Pipeline] echo 00:00:06.838 Sorcerer 10.211.164.101 is alive 00:00:06.844 [Pipeline] httpRequest 00:00:06.848 HttpMethod: GET 00:00:06.849 URL: http://10.211.164.101/packages/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz 00:00:06.849 Sending request to url: http://10.211.164.101/packages/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz 00:00:06.862 Response Code: HTTP/1.1 200 OK 00:00:06.862 Success: Status code 200 is in the accepted range: 200,404 00:00:06.863 Saving response body to /var/jenkins/workspace/iscsi-uring-vg-autotest/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz 00:00:18.179 [Pipeline] sh 00:00:18.463 + tar --no-same-owner -xf jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz 00:00:18.479 [Pipeline] httpRequest 00:00:18.529 [Pipeline] echo 00:00:18.530 Sorcerer 10.211.164.101 is alive 00:00:18.540 [Pipeline] httpRequest 00:00:18.546 HttpMethod: GET 00:00:18.547 URL: 
http://10.211.164.101/packages/spdk_78cbcfdde1ea721461a0377ef7e908b0636460ea.tar.gz 00:00:18.547 Sending request to url: http://10.211.164.101/packages/spdk_78cbcfdde1ea721461a0377ef7e908b0636460ea.tar.gz 00:00:18.568 Response Code: HTTP/1.1 200 OK 00:00:18.569 Success: Status code 200 is in the accepted range: 200,404 00:00:18.569 Saving response body to /var/jenkins/workspace/iscsi-uring-vg-autotest/spdk_78cbcfdde1ea721461a0377ef7e908b0636460ea.tar.gz 00:01:22.926 [Pipeline] sh 00:01:23.232 + tar --no-same-owner -xf spdk_78cbcfdde1ea721461a0377ef7e908b0636460ea.tar.gz 00:01:25.780 [Pipeline] sh 00:01:26.059 + git -C spdk log --oneline -n5 00:01:26.060 78cbcfdde test/scheduler: fix cpu mask for rpc governor tests 00:01:26.060 ba69d4678 event/scheduler: remove custom opts from static scheduler 00:01:26.060 79fce488b test/scheduler: test scheduling period with dynamic scheduler 00:01:26.060 673f37314 ut/nvme_pcie: allocate nvme_pcie_qpair instead of spdk_nvme_qpair 00:01:26.060 084afa904 util: copy errno before calling stdlib's functions 00:01:26.079 [Pipeline] withCredentials 00:01:26.092 > git --version # timeout=10 00:01:26.106 > git --version # 'git version 2.39.2' 00:01:26.123 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:26.126 [Pipeline] { 00:01:26.136 [Pipeline] retry 00:01:26.138 [Pipeline] { 00:01:26.156 [Pipeline] sh 00:01:26.436 + git ls-remote http://dpdk.org/git/dpdk main 00:01:27.826 [Pipeline] } 00:01:27.852 [Pipeline] // retry 00:01:27.860 [Pipeline] } 00:01:27.878 [Pipeline] // withCredentials 00:01:27.886 [Pipeline] httpRequest 00:01:27.904 [Pipeline] echo 00:01:27.905 Sorcerer 10.211.164.101 is alive 00:01:27.913 [Pipeline] httpRequest 00:01:27.917 HttpMethod: GET 00:01:27.917 URL: http://10.211.164.101/packages/dpdk_548de9091c85467bd8f05cecf6d32315869d1461.tar.gz 00:01:27.918 Sending request to url: http://10.211.164.101/packages/dpdk_548de9091c85467bd8f05cecf6d32315869d1461.tar.gz 00:01:27.931 Response Code: HTTP/1.1 
200 OK 00:01:27.931 Success: Status code 200 is in the accepted range: 200,404 00:01:27.932 Saving response body to /var/jenkins/workspace/iscsi-uring-vg-autotest/dpdk_548de9091c85467bd8f05cecf6d32315869d1461.tar.gz 00:01:35.022 [Pipeline] sh 00:01:35.301 + tar --no-same-owner -xf dpdk_548de9091c85467bd8f05cecf6d32315869d1461.tar.gz 00:01:36.685 [Pipeline] sh 00:01:36.962 + git -C dpdk log --oneline -n5 00:01:36.962 548de9091c examples: fix port ID restriction 00:01:36.962 4b97893816 examples: fix lcore ID restriction 00:01:36.962 b23c5bd71a examples: fix queue ID restriction 00:01:36.962 389fca7577 app/testpmd: restore deprecated VXLAN-GPE item support 00:01:36.962 18513927a8 net/ice: fix E830 PTP PHY model 00:01:36.978 [Pipeline] writeFile 00:01:36.992 [Pipeline] sh 00:01:37.267 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:37.278 [Pipeline] sh 00:01:37.557 + cat autorun-spdk.conf 00:01:37.557 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:37.557 SPDK_TEST_ISCSI=1 00:01:37.557 SPDK_TEST_URING=1 00:01:37.557 SPDK_RUN_UBSAN=1 00:01:37.557 SPDK_TEST_NATIVE_DPDK=main 00:01:37.557 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:37.557 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:37.564 RUN_NIGHTLY=1 00:01:37.566 [Pipeline] } 00:01:37.582 [Pipeline] // stage 00:01:37.592 [Pipeline] stage 00:01:37.594 [Pipeline] { (Run VM) 00:01:37.604 [Pipeline] sh 00:01:37.883 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:37.883 + echo 'Start stage prepare_nvme.sh' 00:01:37.883 Start stage prepare_nvme.sh 00:01:37.883 + [[ -n 6 ]] 00:01:37.883 + disk_prefix=ex6 00:01:37.883 + [[ -n /var/jenkins/workspace/iscsi-uring-vg-autotest ]] 00:01:37.883 + [[ -e /var/jenkins/workspace/iscsi-uring-vg-autotest/autorun-spdk.conf ]] 00:01:37.883 + source /var/jenkins/workspace/iscsi-uring-vg-autotest/autorun-spdk.conf 00:01:37.883 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:37.883 ++ SPDK_TEST_ISCSI=1 00:01:37.883 ++ SPDK_TEST_URING=1 00:01:37.883 ++ SPDK_RUN_UBSAN=1 
00:01:37.883 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:37.883 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:37.883 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:37.883 ++ RUN_NIGHTLY=1 00:01:37.883 + cd /var/jenkins/workspace/iscsi-uring-vg-autotest 00:01:37.883 + nvme_files=() 00:01:37.883 + declare -A nvme_files 00:01:37.883 + backend_dir=/var/lib/libvirt/images/backends 00:01:37.883 + nvme_files['nvme.img']=5G 00:01:37.883 + nvme_files['nvme-cmb.img']=5G 00:01:37.883 + nvme_files['nvme-multi0.img']=4G 00:01:37.883 + nvme_files['nvme-multi1.img']=4G 00:01:37.883 + nvme_files['nvme-multi2.img']=4G 00:01:37.883 + nvme_files['nvme-openstack.img']=8G 00:01:37.883 + nvme_files['nvme-zns.img']=5G 00:01:37.883 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:37.883 + (( SPDK_TEST_FTL == 1 )) 00:01:37.883 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:37.883 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:37.883 + for nvme in "${!nvme_files[@]}" 00:01:37.883 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:01:37.883 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:37.883 + for nvme in "${!nvme_files[@]}" 00:01:37.883 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:01:37.883 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:37.883 + for nvme in "${!nvme_files[@]}" 00:01:37.883 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:01:37.883 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:37.883 + for nvme in "${!nvme_files[@]}" 00:01:37.883 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:01:37.883 Formatting 
'/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:37.883 + for nvme in "${!nvme_files[@]}" 00:01:37.883 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:01:37.883 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:37.883 + for nvme in "${!nvme_files[@]}" 00:01:37.883 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:01:38.142 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:38.142 + for nvme in "${!nvme_files[@]}" 00:01:38.142 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:01:38.142 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:38.142 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:01:38.142 + echo 'End stage prepare_nvme.sh' 00:01:38.142 End stage prepare_nvme.sh 00:01:38.153 [Pipeline] sh 00:01:38.433 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:38.433 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora38 00:01:38.433 00:01:38.433 DIR=/var/jenkins/workspace/iscsi-uring-vg-autotest/spdk/scripts/vagrant 00:01:38.433 SPDK_DIR=/var/jenkins/workspace/iscsi-uring-vg-autotest/spdk 00:01:38.433 VAGRANT_TARGET=/var/jenkins/workspace/iscsi-uring-vg-autotest 00:01:38.433 HELP=0 00:01:38.433 DRY_RUN=0 00:01:38.433 
NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img, 00:01:38.433 NVME_DISKS_TYPE=nvme,nvme, 00:01:38.433 NVME_AUTO_CREATE=0 00:01:38.433 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img, 00:01:38.433 NVME_CMB=,, 00:01:38.433 NVME_PMR=,, 00:01:38.433 NVME_ZNS=,, 00:01:38.433 NVME_MS=,, 00:01:38.433 NVME_FDP=,, 00:01:38.433 SPDK_VAGRANT_DISTRO=fedora38 00:01:38.433 SPDK_VAGRANT_VMCPU=10 00:01:38.433 SPDK_VAGRANT_VMRAM=12288 00:01:38.433 SPDK_VAGRANT_PROVIDER=libvirt 00:01:38.433 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:38.433 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:38.433 SPDK_OPENSTACK_NETWORK=0 00:01:38.433 VAGRANT_PACKAGE_BOX=0 00:01:38.433 VAGRANTFILE=/var/jenkins/workspace/iscsi-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:38.433 FORCE_DISTRO=true 00:01:38.433 VAGRANT_BOX_VERSION= 00:01:38.433 EXTRA_VAGRANTFILES= 00:01:38.433 NIC_MODEL=e1000 00:01:38.433 00:01:38.433 mkdir: created directory '/var/jenkins/workspace/iscsi-uring-vg-autotest/fedora38-libvirt' 00:01:38.433 /var/jenkins/workspace/iscsi-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/iscsi-uring-vg-autotest 00:01:40.967 Bringing machine 'default' up with 'libvirt' provider... 00:01:41.902 ==> default: Creating image (snapshot of base box volume). 00:01:41.902 ==> default: Creating domain with the following settings... 
00:01:41.902 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721772193_812afce9f20a5cad427d 00:01:41.902 ==> default: -- Domain type: kvm 00:01:41.902 ==> default: -- Cpus: 10 00:01:41.902 ==> default: -- Feature: acpi 00:01:41.902 ==> default: -- Feature: apic 00:01:41.902 ==> default: -- Feature: pae 00:01:41.902 ==> default: -- Memory: 12288M 00:01:41.902 ==> default: -- Memory Backing: hugepages: 00:01:41.902 ==> default: -- Management MAC: 00:01:41.902 ==> default: -- Loader: 00:01:41.902 ==> default: -- Nvram: 00:01:41.902 ==> default: -- Base box: spdk/fedora38 00:01:41.902 ==> default: -- Storage pool: default 00:01:41.902 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721772193_812afce9f20a5cad427d.img (20G) 00:01:41.902 ==> default: -- Volume Cache: default 00:01:41.902 ==> default: -- Kernel: 00:01:41.902 ==> default: -- Initrd: 00:01:41.902 ==> default: -- Graphics Type: vnc 00:01:41.902 ==> default: -- Graphics Port: -1 00:01:41.902 ==> default: -- Graphics IP: 127.0.0.1 00:01:41.902 ==> default: -- Graphics Password: Not defined 00:01:41.902 ==> default: -- Video Type: cirrus 00:01:41.902 ==> default: -- Video VRAM: 9216 00:01:41.902 ==> default: -- Sound Type: 00:01:41.902 ==> default: -- Keymap: en-us 00:01:41.902 ==> default: -- TPM Path: 00:01:41.902 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:41.902 ==> default: -- Command line args: 00:01:41.902 ==> default: -> value=-device, 00:01:41.902 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:41.902 ==> default: -> value=-drive, 00:01:41.902 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:01:41.902 ==> default: -> value=-device, 00:01:41.902 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:41.902 ==> default: -> value=-device, 00:01:41.902 
==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:41.902 ==> default: -> value=-drive, 00:01:41.902 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:41.902 ==> default: -> value=-device, 00:01:41.902 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:41.902 ==> default: -> value=-drive, 00:01:41.902 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:41.902 ==> default: -> value=-device, 00:01:41.902 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:41.902 ==> default: -> value=-drive, 00:01:41.902 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:41.902 ==> default: -> value=-device, 00:01:41.902 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:41.902 ==> default: Creating shared folders metadata... 00:01:41.902 ==> default: Starting domain. 00:01:43.278 ==> default: Waiting for domain to get an IP address... 00:02:05.207 ==> default: Waiting for SSH to become available... 00:02:05.207 ==> default: Configuring and enabling network interfaces... 00:02:07.740 default: SSH address: 192.168.121.19:22 00:02:07.740 default: SSH username: vagrant 00:02:07.740 default: SSH auth method: private key 00:02:10.273 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/iscsi-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:18.436 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/iscsi-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:25.004 ==> default: Mounting SSHFS shared folder... 
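The `-device`/`-drive` pairs above wire one NVMe controller to three backing images, one namespace each. A minimal standalone sketch of how such an argument list can be assembled (controller id, serial, addresses and image paths are taken from the log but simplified; this is illustrative, not the CI's actual generator):

```shell
#!/usr/bin/env bash
# Build the -device/-drive argument list for one NVMe controller
# (nvme-1) carrying three namespaces, one per raw backing image.
qemu_args=()
ctrl=nvme-1
qemu_args+=(-device "nvme,id=$ctrl,serial=12341,addr=0x11")
nsid=1
for img in multi0 multi1 multi2; do
    drive="$ctrl-drive$((nsid - 1))"
    # Each image becomes an unattached drive...
    qemu_args+=(-drive "format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-$img.img,if=none,id=$drive")
    # ...then an nvme-ns device binds it to the controller as namespace $nsid.
    qemu_args+=(-device "nvme-ns,drive=$drive,bus=$ctrl,nsid=$nsid,logical_block_size=4096,physical_block_size=4096")
    nsid=$((nsid + 1))
done
printf '%s\n' "${qemu_args[@]}"
```

Appending `"${qemu_args[@]}"` to a `qemu-system-x86_64` invocation would reproduce the shape of the command line logged above.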
00:02:26.381 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/iscsi-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:02:26.381 ==> default: Checking Mount.. 00:02:27.757 ==> default: Folder Successfully Mounted! 00:02:27.757 ==> default: Running provisioner: file... 00:02:28.325 default: ~/.gitconfig => .gitconfig 00:02:28.892 00:02:28.892 SUCCESS! 00:02:28.892 00:02:28.892 cd to /var/jenkins/workspace/iscsi-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:02:28.892 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:28.892 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/iscsi-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:02:28.892 00:02:28.903 [Pipeline] } 00:02:28.922 [Pipeline] // stage 00:02:28.932 [Pipeline] dir 00:02:28.933 Running in /var/jenkins/workspace/iscsi-uring-vg-autotest/fedora38-libvirt 00:02:28.934 [Pipeline] { 00:02:28.949 [Pipeline] catchError 00:02:28.951 [Pipeline] { 00:02:28.965 [Pipeline] sh 00:02:29.247 + vagrant ssh-config --host vagrant 00:02:29.247 + sed -ne /^Host/,$p 00:02:29.247 + tee ssh_conf 00:02:32.536 Host vagrant 00:02:32.536 HostName 192.168.121.19 00:02:32.536 User vagrant 00:02:32.536 Port 22 00:02:32.536 UserKnownHostsFile /dev/null 00:02:32.536 StrictHostKeyChecking no 00:02:32.536 PasswordAuthentication no 00:02:32.536 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:32.536 IdentitiesOnly yes 00:02:32.536 LogLevel FATAL 00:02:32.536 ForwardAgent yes 00:02:32.536 ForwardX11 yes 00:02:32.536 00:02:32.549 [Pipeline] withEnv 00:02:32.551 [Pipeline] { 00:02:32.562 [Pipeline] sh 00:02:32.892 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:32.892 source /etc/os-release 00:02:32.892 [[ -e /image.version ]] && img=$(< /image.version) 00:02:32.892 # Minimal, systemd-like check. 
00:02:32.892 if [[ -e /.dockerenv ]]; then 00:02:32.892 # Clear garbage from the node's name: 00:02:32.892 # agt-er_autotest_547-896 -> autotest_547-896 00:02:32.892 # $HOSTNAME is the actual container id 00:02:32.892 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:32.892 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:32.892 # We can assume this is a mount from a host where container is running, 00:02:32.892 # so fetch its hostname to easily identify the target swarm worker. 00:02:32.892 container="$(< /etc/hostname) ($agent)" 00:02:32.892 else 00:02:32.892 # Fallback 00:02:32.892 container=$agent 00:02:32.892 fi 00:02:32.892 fi 00:02:32.892 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:32.892 00:02:32.903 [Pipeline] } 00:02:32.923 [Pipeline] // withEnv 00:02:32.931 [Pipeline] setCustomBuildProperty 00:02:32.945 [Pipeline] stage 00:02:32.947 [Pipeline] { (Tests) 00:02:32.965 [Pipeline] sh 00:02:33.244 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:33.516 [Pipeline] sh 00:02:33.798 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:34.071 [Pipeline] timeout 00:02:34.072 Timeout set to expire in 45 min 00:02:34.074 [Pipeline] { 00:02:34.089 [Pipeline] sh 00:02:34.368 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:34.936 HEAD is now at 78cbcfdde test/scheduler: fix cpu mask for rpc governor tests 00:02:34.948 [Pipeline] sh 00:02:35.228 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:35.501 [Pipeline] sh 00:02:35.782 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:36.057 [Pipeline] sh 00:02:36.338 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 
JOB_BASE_NAME=iscsi-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:36.598 ++ readlink -f spdk_repo 00:02:36.598 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:36.598 + [[ -n /home/vagrant/spdk_repo ]] 00:02:36.598 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:36.598 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:36.598 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:36.598 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:02:36.598 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:36.598 + [[ iscsi-uring-vg-autotest == pkgdep-* ]] 00:02:36.598 + cd /home/vagrant/spdk_repo 00:02:36.598 + source /etc/os-release 00:02:36.598 ++ NAME='Fedora Linux' 00:02:36.598 ++ VERSION='38 (Cloud Edition)' 00:02:36.598 ++ ID=fedora 00:02:36.598 ++ VERSION_ID=38 00:02:36.598 ++ VERSION_CODENAME= 00:02:36.598 ++ PLATFORM_ID=platform:f38 00:02:36.598 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:36.598 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:36.598 ++ LOGO=fedora-logo-icon 00:02:36.598 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:36.598 ++ HOME_URL=https://fedoraproject.org/ 00:02:36.598 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:36.598 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:36.598 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:36.598 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:36.598 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:36.598 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:36.598 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:36.598 ++ SUPPORT_END=2024-05-14 00:02:36.598 ++ VARIANT='Cloud Edition' 00:02:36.598 ++ VARIANT_ID=cloud 00:02:36.598 + uname -a 00:02:36.598 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:36.598 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:37.167 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
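The provisioning step earlier in the log derives an agent label by stripping the scheduler prefix from the Docker swarm agent name (`agt-er_autotest_547-896 -> autotest_547-896`) and pairing it with the container id. That naming heuristic, extracted as a runnable sketch (the function name and sample inputs are illustrative):

```shell
#!/usr/bin/env bash
# Derive "<container-id>@<job-name>" from a swarm agent name, as in
# the log's snippet: ${name#*_} drops everything through the first
# underscore, i.e. the "agt-er_" style scheduler prefix.
agent_label() {
    local container_id=$1 swarm_name=$2
    echo "$container_id@${swarm_name#*_}"
}

agent_label c0ffee12 agt-er_autotest_547-896
```

Note that `#*_` removes the *shortest* leading match, so only the first underscore-delimited token is stripped and the job name keeps its own underscores.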
00:02:37.167 Hugepages 00:02:37.167 node hugesize free / total 00:02:37.167 node0 1048576kB 0 / 0 00:02:37.167 node0 2048kB 0 / 0 00:02:37.167 00:02:37.167 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:37.167 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:37.167 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:37.167 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:37.167 + rm -f /tmp/spdk-ld-path 00:02:37.167 + source autorun-spdk.conf 00:02:37.167 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:37.167 ++ SPDK_TEST_ISCSI=1 00:02:37.167 ++ SPDK_TEST_URING=1 00:02:37.167 ++ SPDK_RUN_UBSAN=1 00:02:37.167 ++ SPDK_TEST_NATIVE_DPDK=main 00:02:37.167 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:37.167 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:37.167 ++ RUN_NIGHTLY=1 00:02:37.167 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:37.167 + [[ -n '' ]] 00:02:37.167 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:37.167 + for M in /var/spdk/build-*-manifest.txt 00:02:37.167 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:37.167 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:37.167 + for M in /var/spdk/build-*-manifest.txt 00:02:37.167 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:37.167 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:37.167 ++ uname 00:02:37.167 + [[ Linux == \L\i\n\u\x ]] 00:02:37.167 + sudo dmesg -T 00:02:37.167 + sudo dmesg --clear 00:02:37.167 + dmesg_pid=5909 00:02:37.167 + [[ Fedora Linux == FreeBSD ]] 00:02:37.167 + sudo dmesg -Tw 00:02:37.167 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:37.167 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:37.167 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:37.167 + [[ -x /usr/src/fio-static/fio ]] 00:02:37.167 + export FIO_BIN=/usr/src/fio-static/fio 00:02:37.167 + FIO_BIN=/usr/src/fio-static/fio 
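The run above sources `/etc/os-release` and later echoes `NAME`/`VERSION_ID`. A minimal sketch of that pattern, using a subshell so the sourced variables do not leak into the caller (the temp file and its contents are illustrative, mirroring the Fedora 38 values in the log):

```shell
#!/usr/bin/env bash
# Read NAME and VERSION_ID from an os-release style file without
# polluting the current shell's environment.
osrel=$(mktemp)
cat > "$osrel" <<'EOF'
NAME='Fedora Linux'
VERSION_ID=38
EOF

# os-release is plain shell variable assignments, so it can be sourced;
# doing it inside $( ) keeps the assignments scoped to the subshell.
desc=$(
    . "$osrel"
    echo "${NAME} ${VERSION_ID}"
)
echo "$desc"
rm -f "$osrel"
```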
00:02:37.167 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:37.167 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:37.167 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:37.167 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:37.167 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:37.167 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:37.167 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:37.167 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:37.167 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:37.167 Test configuration: 00:02:37.167 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:37.167 SPDK_TEST_ISCSI=1 00:02:37.167 SPDK_TEST_URING=1 00:02:37.167 SPDK_RUN_UBSAN=1 00:02:37.167 SPDK_TEST_NATIVE_DPDK=main 00:02:37.167 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:37.167 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:37.427 RUN_NIGHTLY=1 22:04:09 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:37.427 22:04:09 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:37.427 22:04:09 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:37.427 22:04:09 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:37.427 22:04:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.427 22:04:09 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.427 22:04:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.427 22:04:09 -- paths/export.sh@5 -- $ export PATH 00:02:37.427 22:04:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.427 22:04:09 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:37.427 22:04:09 -- common/autobuild_common.sh@447 -- $ date +%s 00:02:37.427 22:04:09 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721772249.XXXXXX 00:02:37.427 22:04:09 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721772249.0jdwlv 00:02:37.427 22:04:09 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:02:37.427 22:04:09 -- common/autobuild_common.sh@453 -- $ '[' -n main ']' 00:02:37.427 22:04:09 -- common/autobuild_common.sh@454 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:37.427 22:04:09 -- common/autobuild_common.sh@454 -- $ 
scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:37.427 22:04:09 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:37.427 22:04:09 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:37.427 22:04:09 -- common/autobuild_common.sh@463 -- $ get_config_params 00:02:37.427 22:04:09 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:37.427 22:04:09 -- common/autotest_common.sh@10 -- $ set +x 00:02:37.427 22:04:09 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:37.427 22:04:09 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:02:37.427 22:04:09 -- pm/common@17 -- $ local monitor 00:02:37.427 22:04:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.427 22:04:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.427 22:04:09 -- pm/common@25 -- $ sleep 1 00:02:37.427 22:04:09 -- pm/common@21 -- $ date +%s 00:02:37.427 22:04:09 -- pm/common@21 -- $ date +%s 00:02:37.427 22:04:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721772249 00:02:37.427 22:04:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721772249 00:02:37.427 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721772249_collect-cpu-load.pm.log 00:02:37.427 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721772249_collect-vmstat.pm.log 00:02:38.365 22:04:10 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:02:38.365 22:04:10 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:38.365 22:04:10 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:38.365 22:04:10 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:38.365 22:04:10 -- spdk/autobuild.sh@16 -- $ date -u 00:02:38.365 Tue Jul 23 10:04:10 PM UTC 2024 00:02:38.365 22:04:10 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:38.365 v24.09-pre-309-g78cbcfdde 00:02:38.365 22:04:10 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:38.365 22:04:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:38.365 22:04:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:38.365 22:04:10 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:38.365 22:04:10 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:38.365 22:04:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:38.365 ************************************ 00:02:38.365 START TEST ubsan 00:02:38.365 ************************************ 00:02:38.365 using ubsan 00:02:38.365 22:04:10 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:38.365 00:02:38.365 real 0m0.000s 00:02:38.365 user 0m0.000s 00:02:38.365 sys 0m0.000s 00:02:38.365 22:04:10 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:38.365 ************************************ 00:02:38.365 END TEST ubsan 00:02:38.365 22:04:10 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:38.365 ************************************ 00:02:38.625 22:04:10 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:02:38.625 22:04:10 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:38.625 22:04:10 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:38.625 22:04:10 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 
00:02:38.625 22:04:10 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:38.625 22:04:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:38.625 ************************************ 00:02:38.625 START TEST build_native_dpdk 00:02:38.625 ************************************ 00:02:38.625 22:04:10 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:38.625 
22:04:10 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:38.625 548de9091c examples: fix port ID restriction 00:02:38.625 4b97893816 examples: fix lcore ID restriction 00:02:38.625 b23c5bd71a examples: fix queue ID restriction 00:02:38.625 389fca7577 app/testpmd: restore deprecated VXLAN-GPE item support 00:02:38.625 18513927a8 net/ice: fix E830 PTP PHY model 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc2 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:38.625 22:04:10 build_native_dpdk 
-- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc2 21.11.0 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc2 '<' 21.11.0 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:38.625 patching file config/rte_config.h 00:02:38.625 Hunk #1 succeeded at 70 (offset 11 lines). 
00:02:38.625 22:04:10 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.07.0-rc2 24.07.0 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc2 '<' 24.07.0 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:02:38.625 22:04:10 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 07 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@350 -- $ local d=07 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@352 -- $ echo 7 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=7 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 07 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@350 -- $ local d=07 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@352 -- $ echo 7 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=7 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 0 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@350 -- $ local d=0 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 0 =~ ^[0-9]+$ ]] 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@352 -- $ echo 0 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=0 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 0 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@350 -- $ local d=0 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 0 =~ ^[0-9]+$ ]] 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@352 -- $ echo 0 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=0 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@362 -- $ decimal rc2 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@350 -- $ local d=rc2 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@351 -- $ [[ rc2 =~ ^[0-9]+$ ]] 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@353 -- $ [[ rc2 =~ ^0x ]] 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@353 -- $ [[ rc2 =~ ^[a-f0-9]+$ ]] 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@357 -- $ echo 0 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=0 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@363 -- $ decimal '' 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@350 -- $ local d= 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@351 -- $ [[ '' =~ ^[0-9]+$ ]] 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@353 -- $ [[ '' =~ ^0x ]] 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@353 -- $ [[ '' =~ ^[a-f0-9]+$ ]] 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@357 -- $ echo 0 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=0 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@367 -- $ [[ 24 7 0 0 == \2\4\ \7\ \0\ \0 ]] 00:02:38.626 22:04:10 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:38.626 22:04:10 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:38.626 22:04:10 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:02:38.626 22:04:10 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:38.626 22:04:10 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:38.626 22:04:10 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:43.912 The Meson build system 00:02:43.912 Version: 1.3.1 00:02:43.912 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:43.912 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:43.912 Build type: native build 00:02:43.912 Program cat found: YES (/usr/bin/cat) 00:02:43.912 Project name: DPDK 00:02:43.912 Project version: 24.07.0-rc2 00:02:43.912 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:43.912 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:43.912 Host machine cpu family: x86_64 00:02:43.912 Host machine cpu: x86_64 00:02:43.912 Message: ## Building in Developer Mode ## 00:02:43.912 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:43.912 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:43.912 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:43.912 Program python3 
(elftools) found: YES (/usr/bin/python3) modules: elftools 00:02:43.912 Program cat found: YES (/usr/bin/cat) 00:02:43.912 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:02:43.912 Compiler for C supports arguments -march=native: YES 00:02:43.912 Checking for size of "void *" : 8 00:02:43.912 Checking for size of "void *" : 8 (cached) 00:02:43.912 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:43.912 Library m found: YES 00:02:43.913 Library numa found: YES 00:02:43.913 Has header "numaif.h" : YES 00:02:43.913 Library fdt found: NO 00:02:43.913 Library execinfo found: NO 00:02:43.913 Has header "execinfo.h" : YES 00:02:43.913 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:43.913 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:43.913 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:43.913 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:43.913 Run-time dependency openssl found: YES 3.0.9 00:02:43.913 Run-time dependency libpcap found: YES 1.10.4 00:02:43.913 Has header "pcap.h" with dependency libpcap: YES 00:02:43.913 Compiler for C supports arguments -Wcast-qual: YES 00:02:43.913 Compiler for C supports arguments -Wdeprecated: YES 00:02:43.913 Compiler for C supports arguments -Wformat: YES 00:02:43.913 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:43.913 Compiler for C supports arguments -Wformat-security: NO 00:02:43.913 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:43.913 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:43.913 Compiler for C supports arguments -Wnested-externs: YES 00:02:43.913 Compiler for C supports arguments -Wold-style-definition: YES 00:02:43.913 Compiler for C supports arguments -Wpointer-arith: YES 00:02:43.913 Compiler for C supports arguments -Wsign-compare: YES 00:02:43.913 Compiler for C supports arguments -Wstrict-prototypes: YES 
00:02:43.913 Compiler for C supports arguments -Wundef: YES 00:02:43.913 Compiler for C supports arguments -Wwrite-strings: YES 00:02:43.913 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:43.913 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:43.913 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:43.913 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:43.913 Program objdump found: YES (/usr/bin/objdump) 00:02:43.913 Compiler for C supports arguments -mavx512f: YES 00:02:43.913 Checking if "AVX512 checking" compiles: YES 00:02:43.913 Fetching value of define "__SSE4_2__" : 1 00:02:43.913 Fetching value of define "__AES__" : 1 00:02:43.913 Fetching value of define "__AVX__" : 1 00:02:43.913 Fetching value of define "__AVX2__" : 1 00:02:43.913 Fetching value of define "__AVX512BW__" : 1 00:02:43.913 Fetching value of define "__AVX512CD__" : 1 00:02:43.913 Fetching value of define "__AVX512DQ__" : 1 00:02:43.913 Fetching value of define "__AVX512F__" : 1 00:02:43.913 Fetching value of define "__AVX512VL__" : 1 00:02:43.913 Fetching value of define "__PCLMUL__" : 1 00:02:43.913 Fetching value of define "__RDRND__" : 1 00:02:43.913 Fetching value of define "__RDSEED__" : 1 00:02:43.913 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:43.913 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:43.913 Message: lib/log: Defining dependency "log" 00:02:43.913 Message: lib/kvargs: Defining dependency "kvargs" 00:02:43.913 Message: lib/argparse: Defining dependency "argparse" 00:02:43.913 Message: lib/telemetry: Defining dependency "telemetry" 00:02:43.913 Checking for function "getentropy" : NO 00:02:43.913 Message: lib/eal: Defining dependency "eal" 00:02:43.913 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:02:43.913 Message: lib/ring: Defining dependency "ring" 00:02:43.913 Message: lib/rcu: Defining dependency "rcu" 00:02:43.913 
Message: lib/mempool: Defining dependency "mempool" 00:02:43.913 Message: lib/mbuf: Defining dependency "mbuf" 00:02:43.913 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:43.913 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:43.913 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:43.913 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:43.913 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:43.913 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:43.913 Compiler for C supports arguments -mpclmul: YES 00:02:43.913 Compiler for C supports arguments -maes: YES 00:02:43.913 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:43.913 Compiler for C supports arguments -mavx512bw: YES 00:02:43.913 Compiler for C supports arguments -mavx512dq: YES 00:02:43.913 Compiler for C supports arguments -mavx512vl: YES 00:02:43.913 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:43.913 Compiler for C supports arguments -mavx2: YES 00:02:43.913 Compiler for C supports arguments -mavx: YES 00:02:43.913 Message: lib/net: Defining dependency "net" 00:02:43.913 Message: lib/meter: Defining dependency "meter" 00:02:43.913 Message: lib/ethdev: Defining dependency "ethdev" 00:02:43.913 Message: lib/pci: Defining dependency "pci" 00:02:43.913 Message: lib/cmdline: Defining dependency "cmdline" 00:02:43.913 Message: lib/metrics: Defining dependency "metrics" 00:02:43.913 Message: lib/hash: Defining dependency "hash" 00:02:43.913 Message: lib/timer: Defining dependency "timer" 00:02:43.913 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:43.913 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:43.913 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:43.913 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:43.913 Message: lib/acl: Defining dependency "acl" 00:02:43.913 Message: lib/bbdev: Defining dependency "bbdev" 00:02:43.913 Message: lib/bitratestats: Defining 
dependency "bitratestats" 00:02:43.913 Run-time dependency libelf found: YES 0.190 00:02:43.913 Message: lib/bpf: Defining dependency "bpf" 00:02:43.913 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:43.913 Message: lib/compressdev: Defining dependency "compressdev" 00:02:43.913 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:43.913 Message: lib/distributor: Defining dependency "distributor" 00:02:43.913 Message: lib/dmadev: Defining dependency "dmadev" 00:02:43.913 Message: lib/efd: Defining dependency "efd" 00:02:43.913 Message: lib/eventdev: Defining dependency "eventdev" 00:02:43.913 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:43.913 Message: lib/gpudev: Defining dependency "gpudev" 00:02:43.913 Message: lib/gro: Defining dependency "gro" 00:02:43.913 Message: lib/gso: Defining dependency "gso" 00:02:43.913 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:43.913 Message: lib/jobstats: Defining dependency "jobstats" 00:02:43.913 Message: lib/latencystats: Defining dependency "latencystats" 00:02:43.913 Message: lib/lpm: Defining dependency "lpm" 00:02:43.913 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:43.913 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:43.913 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:43.913 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:43.913 Message: lib/member: Defining dependency "member" 00:02:43.913 Message: lib/pcapng: Defining dependency "pcapng" 00:02:43.913 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:43.913 Message: lib/power: Defining dependency "power" 00:02:43.913 Message: lib/rawdev: Defining dependency "rawdev" 00:02:43.913 Message: lib/regexdev: Defining dependency "regexdev" 00:02:43.913 Message: lib/mldev: Defining dependency "mldev" 00:02:43.913 Message: lib/rib: Defining dependency "rib" 00:02:43.913 Message: lib/reorder: Defining dependency "reorder" 00:02:43.913 Message: 
lib/sched: Defining dependency "sched" 00:02:43.913 Message: lib/security: Defining dependency "security" 00:02:43.913 Message: lib/stack: Defining dependency "stack" 00:02:43.913 Has header "linux/userfaultfd.h" : YES 00:02:43.913 Has header "linux/vduse.h" : YES 00:02:43.913 Message: lib/vhost: Defining dependency "vhost" 00:02:43.913 Message: lib/ipsec: Defining dependency "ipsec" 00:02:43.913 Message: lib/pdcp: Defining dependency "pdcp" 00:02:43.913 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:43.913 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:43.913 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:43.913 Message: lib/fib: Defining dependency "fib" 00:02:43.913 Message: lib/port: Defining dependency "port" 00:02:43.913 Message: lib/pdump: Defining dependency "pdump" 00:02:43.913 Message: lib/table: Defining dependency "table" 00:02:43.913 Message: lib/pipeline: Defining dependency "pipeline" 00:02:43.913 Message: lib/graph: Defining dependency "graph" 00:02:43.913 Message: lib/node: Defining dependency "node" 00:02:43.913 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:43.913 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:43.913 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:45.816 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:45.816 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:45.816 Compiler for C supports arguments -Wno-unused-value: YES 00:02:45.816 Compiler for C supports arguments -Wno-format: YES 00:02:45.816 Compiler for C supports arguments -Wno-format-security: YES 00:02:45.816 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:45.816 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:45.816 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:45.816 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:45.816 Fetching value of define 
"__AVX512F__" : 1 (cached) 00:02:45.816 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:45.816 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:45.816 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:45.816 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:45.816 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:45.816 Has header "sys/epoll.h" : YES 00:02:45.816 Program doxygen found: YES (/usr/bin/doxygen) 00:02:45.816 Configuring doxy-api-html.conf using configuration 00:02:45.816 Configuring doxy-api-man.conf using configuration 00:02:45.816 Program mandb found: YES (/usr/bin/mandb) 00:02:45.816 Program sphinx-build found: NO 00:02:45.816 Configuring rte_build_config.h using configuration 00:02:45.816 Message: 00:02:45.816 ================= 00:02:45.816 Applications Enabled 00:02:45.816 ================= 00:02:45.816 00:02:45.816 apps: 00:02:45.816 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:45.816 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:45.816 test-pmd, test-regex, test-sad, test-security-perf, 00:02:45.816 00:02:45.816 Message: 00:02:45.816 ================= 00:02:45.816 Libraries Enabled 00:02:45.816 ================= 00:02:45.816 00:02:45.816 libs: 00:02:45.816 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:02:45.816 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:02:45.816 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:02:45.816 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:02:45.816 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:02:45.816 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:02:45.816 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:02:45.816 graph, node, 00:02:45.816 00:02:45.816 Message: 00:02:45.816 
=============== 00:02:45.816 Drivers Enabled 00:02:45.816 =============== 00:02:45.816 00:02:45.816 common: 00:02:45.816 00:02:45.816 bus: 00:02:45.816 pci, vdev, 00:02:45.816 mempool: 00:02:45.816 ring, 00:02:45.816 dma: 00:02:45.816 00:02:45.816 net: 00:02:45.816 i40e, 00:02:45.816 raw: 00:02:45.816 00:02:45.816 crypto: 00:02:45.816 00:02:45.816 compress: 00:02:45.816 00:02:45.816 regex: 00:02:45.816 00:02:45.816 ml: 00:02:45.816 00:02:45.816 vdpa: 00:02:45.816 00:02:45.816 event: 00:02:45.816 00:02:45.816 baseband: 00:02:45.816 00:02:45.816 gpu: 00:02:45.816 00:02:45.816 00:02:45.816 Message: 00:02:45.816 ================= 00:02:45.816 Content Skipped 00:02:45.816 ================= 00:02:45.816 00:02:45.816 apps: 00:02:45.816 00:02:45.816 libs: 00:02:45.816 00:02:45.816 drivers: 00:02:45.816 common/cpt: not in enabled drivers build config 00:02:45.816 common/dpaax: not in enabled drivers build config 00:02:45.816 common/iavf: not in enabled drivers build config 00:02:45.816 common/idpf: not in enabled drivers build config 00:02:45.816 common/ionic: not in enabled drivers build config 00:02:45.816 common/mvep: not in enabled drivers build config 00:02:45.816 common/octeontx: not in enabled drivers build config 00:02:45.816 bus/auxiliary: not in enabled drivers build config 00:02:45.816 bus/cdx: not in enabled drivers build config 00:02:45.816 bus/dpaa: not in enabled drivers build config 00:02:45.816 bus/fslmc: not in enabled drivers build config 00:02:45.816 bus/ifpga: not in enabled drivers build config 00:02:45.816 bus/platform: not in enabled drivers build config 00:02:45.816 bus/uacce: not in enabled drivers build config 00:02:45.816 bus/vmbus: not in enabled drivers build config 00:02:45.816 common/cnxk: not in enabled drivers build config 00:02:45.816 common/mlx5: not in enabled drivers build config 00:02:45.816 common/nfp: not in enabled drivers build config 00:02:45.816 common/nitrox: not in enabled drivers build config 00:02:45.816 common/qat: not in 
enabled drivers build config 00:02:45.816 common/sfc_efx: not in enabled drivers build config 00:02:45.816 mempool/bucket: not in enabled drivers build config 00:02:45.816 mempool/cnxk: not in enabled drivers build config 00:02:45.816 mempool/dpaa: not in enabled drivers build config 00:02:45.816 mempool/dpaa2: not in enabled drivers build config 00:02:45.816 mempool/octeontx: not in enabled drivers build config 00:02:45.816 mempool/stack: not in enabled drivers build config 00:02:45.816 dma/cnxk: not in enabled drivers build config 00:02:45.816 dma/dpaa: not in enabled drivers build config 00:02:45.816 dma/dpaa2: not in enabled drivers build config 00:02:45.816 dma/hisilicon: not in enabled drivers build config 00:02:45.816 dma/idxd: not in enabled drivers build config 00:02:45.816 dma/ioat: not in enabled drivers build config 00:02:45.816 dma/odm: not in enabled drivers build config 00:02:45.816 dma/skeleton: not in enabled drivers build config 00:02:45.816 net/af_packet: not in enabled drivers build config 00:02:45.816 net/af_xdp: not in enabled drivers build config 00:02:45.816 net/ark: not in enabled drivers build config 00:02:45.816 net/atlantic: not in enabled drivers build config 00:02:45.816 net/avp: not in enabled drivers build config 00:02:45.816 net/axgbe: not in enabled drivers build config 00:02:45.816 net/bnx2x: not in enabled drivers build config 00:02:45.816 net/bnxt: not in enabled drivers build config 00:02:45.816 net/bonding: not in enabled drivers build config 00:02:45.816 net/cnxk: not in enabled drivers build config 00:02:45.816 net/cpfl: not in enabled drivers build config 00:02:45.816 net/cxgbe: not in enabled drivers build config 00:02:45.816 net/dpaa: not in enabled drivers build config 00:02:45.816 net/dpaa2: not in enabled drivers build config 00:02:45.816 net/e1000: not in enabled drivers build config 00:02:45.816 net/ena: not in enabled drivers build config 00:02:45.816 net/enetc: not in enabled drivers build config 00:02:45.816 
net/enetfec: not in enabled drivers build config 00:02:45.816 net/enic: not in enabled drivers build config 00:02:45.816 net/failsafe: not in enabled drivers build config 00:02:45.816 net/fm10k: not in enabled drivers build config 00:02:45.816 net/gve: not in enabled drivers build config 00:02:45.816 net/hinic: not in enabled drivers build config 00:02:45.816 net/hns3: not in enabled drivers build config 00:02:45.816 net/iavf: not in enabled drivers build config 00:02:45.816 net/ice: not in enabled drivers build config 00:02:45.816 net/idpf: not in enabled drivers build config 00:02:45.816 net/igc: not in enabled drivers build config 00:02:45.816 net/ionic: not in enabled drivers build config 00:02:45.816 net/ipn3ke: not in enabled drivers build config 00:02:45.816 net/ixgbe: not in enabled drivers build config 00:02:45.816 net/mana: not in enabled drivers build config 00:02:45.816 net/memif: not in enabled drivers build config 00:02:45.816 net/mlx4: not in enabled drivers build config 00:02:45.816 net/mlx5: not in enabled drivers build config 00:02:45.816 net/mvneta: not in enabled drivers build config 00:02:45.816 net/mvpp2: not in enabled drivers build config 00:02:45.816 net/netvsc: not in enabled drivers build config 00:02:45.816 net/nfb: not in enabled drivers build config 00:02:45.816 net/nfp: not in enabled drivers build config 00:02:45.816 net/ngbe: not in enabled drivers build config 00:02:45.816 net/ntnic: not in enabled drivers build config 00:02:45.816 net/null: not in enabled drivers build config 00:02:45.816 net/octeontx: not in enabled drivers build config 00:02:45.816 net/octeon_ep: not in enabled drivers build config 00:02:45.816 net/pcap: not in enabled drivers build config 00:02:45.816 net/pfe: not in enabled drivers build config 00:02:45.816 net/qede: not in enabled drivers build config 00:02:45.816 net/ring: not in enabled drivers build config 00:02:45.816 net/sfc: not in enabled drivers build config 00:02:45.816 net/softnic: not in enabled 
drivers build config 00:02:45.816 net/tap: not in enabled drivers build config 00:02:45.816 net/thunderx: not in enabled drivers build config 00:02:45.816 net/txgbe: not in enabled drivers build config 00:02:45.816 net/vdev_netvsc: not in enabled drivers build config 00:02:45.816 net/vhost: not in enabled drivers build config 00:02:45.816 net/virtio: not in enabled drivers build config 00:02:45.816 net/vmxnet3: not in enabled drivers build config 00:02:45.816 raw/cnxk_bphy: not in enabled drivers build config 00:02:45.816 raw/cnxk_gpio: not in enabled drivers build config 00:02:45.816 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:45.816 raw/ifpga: not in enabled drivers build config 00:02:45.816 raw/ntb: not in enabled drivers build config 00:02:45.817 raw/skeleton: not in enabled drivers build config 00:02:45.817 crypto/armv8: not in enabled drivers build config 00:02:45.817 crypto/bcmfs: not in enabled drivers build config 00:02:45.817 crypto/caam_jr: not in enabled drivers build config 00:02:45.817 crypto/ccp: not in enabled drivers build config 00:02:45.817 crypto/cnxk: not in enabled drivers build config 00:02:45.817 crypto/dpaa_sec: not in enabled drivers build config 00:02:45.817 crypto/dpaa2_sec: not in enabled drivers build config 00:02:45.817 crypto/ionic: not in enabled drivers build config 00:02:45.817 crypto/ipsec_mb: not in enabled drivers build config 00:02:45.817 crypto/mlx5: not in enabled drivers build config 00:02:45.817 crypto/mvsam: not in enabled drivers build config 00:02:45.817 crypto/nitrox: not in enabled drivers build config 00:02:45.817 crypto/null: not in enabled drivers build config 00:02:45.817 crypto/octeontx: not in enabled drivers build config 00:02:45.817 crypto/openssl: not in enabled drivers build config 00:02:45.817 crypto/scheduler: not in enabled drivers build config 00:02:45.817 crypto/uadk: not in enabled drivers build config 00:02:45.817 crypto/virtio: not in enabled drivers build config 00:02:45.817 
compress/isal: not in enabled drivers build config 00:02:45.817 compress/mlx5: not in enabled drivers build config 00:02:45.817 compress/nitrox: not in enabled drivers build config 00:02:45.817 compress/octeontx: not in enabled drivers build config 00:02:45.817 compress/uadk: not in enabled drivers build config 00:02:45.817 compress/zlib: not in enabled drivers build config 00:02:45.817 regex/mlx5: not in enabled drivers build config 00:02:45.817 regex/cn9k: not in enabled drivers build config 00:02:45.817 ml/cnxk: not in enabled drivers build config 00:02:45.817 vdpa/ifc: not in enabled drivers build config 00:02:45.817 vdpa/mlx5: not in enabled drivers build config 00:02:45.817 vdpa/nfp: not in enabled drivers build config 00:02:45.817 vdpa/sfc: not in enabled drivers build config 00:02:45.817 event/cnxk: not in enabled drivers build config 00:02:45.817 event/dlb2: not in enabled drivers build config 00:02:45.817 event/dpaa: not in enabled drivers build config 00:02:45.817 event/dpaa2: not in enabled drivers build config 00:02:45.817 event/dsw: not in enabled drivers build config 00:02:45.817 event/opdl: not in enabled drivers build config 00:02:45.817 event/skeleton: not in enabled drivers build config 00:02:45.817 event/sw: not in enabled drivers build config 00:02:45.817 event/octeontx: not in enabled drivers build config 00:02:45.817 baseband/acc: not in enabled drivers build config 00:02:45.817 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:45.817 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:45.817 baseband/la12xx: not in enabled drivers build config 00:02:45.817 baseband/null: not in enabled drivers build config 00:02:45.817 baseband/turbo_sw: not in enabled drivers build config 00:02:45.817 gpu/cuda: not in enabled drivers build config 00:02:45.817 00:02:45.817 00:02:45.817 Build targets in project: 221 00:02:45.817 00:02:45.817 DPDK 24.07.0-rc2 00:02:45.817 00:02:45.817 User defined options 00:02:45.817 libdir : lib 
00:02:45.817 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:45.817 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:45.817 c_link_args : 00:02:45.817 enable_docs : false 00:02:45.817 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:45.817 enable_kmods : false 00:02:45.817 machine : native 00:02:45.817 tests : false 00:02:45.817 00:02:45.817 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:45.817 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:45.817 22:04:17 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:45.817 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:46.076 [1/720] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:46.076 [2/720] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:46.076 [3/720] Linking static target lib/librte_kvargs.a 00:02:46.076 [4/720] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:46.076 [5/720] Linking static target lib/librte_log.a 00:02:46.076 [6/720] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:46.335 [7/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:46.335 [8/720] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:02:46.335 [9/720] Linking static target lib/librte_argparse.a 00:02:46.335 [10/720] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.335 [11/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:46.335 [12/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:46.335 [13/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:46.335 [14/720] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:46.335 
[15/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:46.335 [16/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:46.335 [17/720] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.594 [18/720] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.594 [19/720] Linking target lib/librte_log.so.24.2 00:02:46.594 [20/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:46.594 [21/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:46.594 [22/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:46.853 [23/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:46.853 [24/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:46.853 [25/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:46.853 [26/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:46.853 [27/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:46.853 [28/720] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:46.853 [29/720] Linking static target lib/librte_telemetry.a 00:02:46.853 [30/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:47.112 [31/720] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:02:47.112 [32/720] Linking target lib/librte_kvargs.so.24.2 00:02:47.112 [33/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:47.112 [34/720] Linking target lib/librte_argparse.so.24.2 00:02:47.112 [35/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:47.112 [36/720] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:02:47.112 [37/720] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:47.112 [38/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:47.370 [39/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:47.370 [40/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:47.370 [41/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:47.370 [42/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:47.370 [43/720] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.370 [44/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:47.370 [45/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:47.371 [46/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:47.371 [47/720] Linking target lib/librte_telemetry.so.24.2 00:02:47.371 [48/720] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:02:47.629 [49/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:47.629 [50/720] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:47.629 [51/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:47.629 [52/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:47.629 [53/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:47.888 [54/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:47.888 [55/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:47.888 [56/720] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:47.888 [57/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:47.888 [58/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 
00:02:47.888 [59/720] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:47.888 [60/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:47.888 [61/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:48.146 [62/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:48.146 [63/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:48.146 [64/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:48.146 [65/720] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:48.146 [66/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:48.146 [67/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:48.146 [68/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:48.146 [69/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:48.146 [70/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:48.404 [71/720] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:48.404 [72/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:48.663 [73/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:48.663 [74/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:48.663 [75/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:48.663 [76/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:48.663 [77/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:48.663 [78/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:48.663 [79/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:48.663 [80/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:48.663 [81/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:48.922 [82/720] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:48.922 [83/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:48.922 [84/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:48.922 [85/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:48.922 [86/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:02:48.922 [87/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:48.922 [88/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:49.180 [89/720] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:49.180 [90/720] Linking static target lib/librte_ring.a 00:02:49.180 [91/720] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:49.180 [92/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:49.180 [93/720] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:49.180 [94/720] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.180 [95/720] Linking static target lib/librte_eal.a 00:02:49.180 [96/720] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:49.438 [97/720] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:49.438 [98/720] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:49.698 [99/720] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:49.698 [100/720] Linking static target lib/librte_rcu.a 00:02:49.698 [101/720] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:49.698 [102/720] Linking static target lib/librte_mempool.a 00:02:49.698 [103/720] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:49.698 [104/720] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:49.698 [105/720] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 
00:02:49.698 [106/720] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:49.698 [107/720] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:49.698 [108/720] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:49.698 [109/720] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:49.957 [110/720] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.957 [111/720] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:49.957 [112/720] Linking static target lib/librte_mbuf.a 00:02:49.957 [113/720] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:49.957 [114/720] Linking static target lib/librte_net.a 00:02:50.215 [115/720] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:50.215 [116/720] Linking static target lib/librte_meter.a 00:02:50.215 [117/720] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:50.215 [118/720] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:50.215 [119/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:50.215 [120/720] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.215 [121/720] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.473 [122/720] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.473 [123/720] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:50.473 [124/720] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.730 [125/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:50.730 [126/720] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:50.988 [127/720] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:50.988 [128/720] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:50.988 [129/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:51.247 [130/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:51.247 [131/720] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:51.247 [132/720] Linking static target lib/librte_pci.a 00:02:51.247 [133/720] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:51.247 [134/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:51.247 [135/720] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:51.247 [136/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:51.247 [137/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:51.247 [138/720] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.505 [139/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:51.505 [140/720] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:51.505 [141/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:51.505 [142/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:51.505 [143/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:51.505 [144/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:51.505 [145/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:51.505 [146/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:51.505 [147/720] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:51.505 [148/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:51.762 [149/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:51.762 [150/720] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:51.763 [151/720] Linking static target lib/librte_cmdline.a 00:02:52.031 [152/720] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:52.031 [153/720] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:52.031 [154/720] Linking static target lib/librte_metrics.a 00:02:52.031 [155/720] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:52.031 [156/720] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:52.031 [157/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:52.289 [158/720] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:52.289 [159/720] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.289 [160/720] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:52.548 [161/720] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.548 [162/720] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:52.548 [163/720] Linking static target lib/librte_timer.a 00:02:52.808 [164/720] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:52.808 [165/720] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:52.808 [166/720] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:52.808 [167/720] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:52.808 [168/720] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.376 [169/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:53.376 [170/720] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:53.376 [171/720] Linking static target lib/librte_bitratestats.a 00:02:53.376 [172/720] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:53.376 [173/720] Generating lib/bitratestats.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:53.635 [174/720] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:53.635 [175/720] Linking static target lib/librte_bbdev.a 00:02:53.635 [176/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:53.894 [177/720] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:53.894 [178/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:53.894 [179/720] Linking static target lib/librte_hash.a 00:02:53.894 [180/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:54.153 [181/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:54.153 [182/720] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.153 [183/720] Linking static target lib/librte_ethdev.a 00:02:54.153 [184/720] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:54.153 [185/720] Linking static target lib/acl/libavx2_tmp.a 00:02:54.153 [186/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:54.413 [187/720] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:54.413 [188/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:54.672 [189/720] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.672 [190/720] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:54.672 [191/720] Linking static target lib/librte_cfgfile.a 00:02:54.672 [192/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:54.672 [193/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:54.931 [194/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:54.931 [195/720] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:54.931 [196/720] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.931 [197/720] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:55.190 [198/720] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:55.190 [199/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:55.190 [200/720] Linking static target lib/librte_compressdev.a 00:02:55.190 [201/720] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:55.190 [202/720] Linking static target lib/librte_bpf.a 00:02:55.190 [203/720] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:55.190 [204/720] Linking static target lib/librte_acl.a 00:02:55.449 [205/720] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:55.449 [206/720] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:55.449 [207/720] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.449 [208/720] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:55.708 [209/720] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.708 [210/720] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.708 [211/720] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:55.708 [212/720] Linking static target lib/librte_distributor.a 00:02:55.708 [213/720] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.708 [214/720] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:55.967 [215/720] Linking target lib/librte_eal.so.24.2 00:02:55.967 [216/720] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:55.967 [217/720] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.967 [218/720] Generating symbol file 
lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:02:55.967 [219/720] Linking target lib/librte_ring.so.24.2 00:02:56.226 [220/720] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:56.226 [221/720] Linking target lib/librte_meter.so.24.2 00:02:56.226 [222/720] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:02:56.226 [223/720] Linking target lib/librte_rcu.so.24.2 00:02:56.226 [224/720] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:02:56.226 [225/720] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:02:56.226 [226/720] Linking target lib/librte_mempool.so.24.2 00:02:56.226 [227/720] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:56.226 [228/720] Linking target lib/librte_pci.so.24.2 00:02:56.485 [229/720] Linking target lib/librte_timer.so.24.2 00:02:56.485 [230/720] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:02:56.485 [231/720] Linking target lib/librte_mbuf.so.24.2 00:02:56.485 [232/720] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:02:56.485 [233/720] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:02:56.485 [234/720] Linking target lib/librte_acl.so.24.2 00:02:56.485 [235/720] Linking target lib/librte_cfgfile.so.24.2 00:02:56.485 [236/720] Linking static target lib/librte_dmadev.a 00:02:56.485 [237/720] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:56.485 [238/720] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:02:56.485 [239/720] Linking static target lib/librte_efd.a 00:02:56.485 [240/720] Linking target lib/librte_net.so.24.2 00:02:56.744 [241/720] Linking target lib/librte_bbdev.so.24.2 00:02:56.744 [242/720] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:02:56.744 [243/720] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:56.744 [244/720] Linking target lib/librte_compressdev.so.24.2 00:02:56.744 [245/720] Linking target lib/librte_distributor.so.24.2 00:02:56.744 [246/720] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:02:56.744 [247/720] Linking target lib/librte_cmdline.so.24.2 00:02:56.744 [248/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:56.744 [249/720] Linking target lib/librte_hash.so.24.2 00:02:57.004 [250/720] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.004 [251/720] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:57.004 [252/720] Linking static target lib/librte_cryptodev.a 00:02:57.004 [253/720] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:02:57.004 [254/720] Linking target lib/librte_efd.so.24.2 00:02:57.004 [255/720] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.004 [256/720] Linking target lib/librte_dmadev.so.24.2 00:02:57.004 [257/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:57.004 [258/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:57.263 [259/720] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:02:57.522 [260/720] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:57.522 [261/720] Linking static target lib/librte_dispatcher.a 00:02:57.522 [262/720] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:57.522 [263/720] Linking static target lib/librte_gpudev.a 00:02:57.522 [264/720] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:57.522 [265/720] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:57.522 [266/720] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:57.780 
[267/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:58.039 [268/720] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.039 [269/720] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.298 [270/720] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:58.298 [271/720] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:58.298 [272/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:58.298 [273/720] Linking target lib/librte_cryptodev.so.24.2 00:02:58.298 [274/720] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:58.298 [275/720] Linking static target lib/librte_gro.a 00:02:58.298 [276/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:58.298 [277/720] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:58.298 [278/720] Linking static target lib/librte_eventdev.a 00:02:58.298 [279/720] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:58.298 [280/720] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:02:58.298 [281/720] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.559 [282/720] Linking target lib/librte_gpudev.so.24.2 00:02:58.559 [283/720] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.559 [284/720] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:58.559 [285/720] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:58.559 [286/720] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:58.839 [287/720] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:58.839 [288/720] Linking static target lib/librte_gso.a 00:02:58.839 [289/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 
00:02:58.839 [290/720] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.839 [291/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:58.839 [292/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:59.105 [293/720] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:59.105 [294/720] Linking static target lib/librte_jobstats.a 00:02:59.105 [295/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:59.105 [296/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:59.105 [297/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:59.106 [298/720] Linking static target lib/librte_ip_frag.a 00:02:59.365 [299/720] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.365 [300/720] Linking target lib/librte_jobstats.so.24.2 00:02:59.365 [301/720] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:59.365 [302/720] Linking static target lib/librte_latencystats.a 00:02:59.365 [303/720] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.365 [304/720] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:59.365 [305/720] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.365 [306/720] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:59.365 [307/720] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:59.365 [308/720] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:59.624 [309/720] Linking target lib/librte_ethdev.so.24.2 00:02:59.624 [310/720] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:59.624 [311/720] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson 
to capture output) 00:02:59.624 [312/720] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:02:59.624 [313/720] Linking target lib/librte_metrics.so.24.2 00:02:59.624 [314/720] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:59.883 [315/720] Linking target lib/librte_bpf.so.24.2 00:02:59.883 [316/720] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:59.883 [317/720] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:02:59.883 [318/720] Linking target lib/librte_gro.so.24.2 00:02:59.883 [319/720] Linking target lib/librte_bitratestats.so.24.2 00:02:59.883 [320/720] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 00:02:59.883 [321/720] Linking target lib/librte_ip_frag.so.24.2 00:02:59.883 [322/720] Linking target lib/librte_gso.so.24.2 00:02:59.883 [323/720] Linking static target lib/librte_lpm.a 00:02:59.883 [324/720] Linking target lib/librte_latencystats.so.24.2 00:02:59.883 [325/720] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:59.883 [326/720] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:59.883 [327/720] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:00.142 [328/720] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:03:00.142 [329/720] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:00.142 [330/720] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:00.142 [331/720] Linking static target lib/librte_pcapng.a 00:03:00.142 [332/720] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:00.142 [333/720] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.401 [334/720] Linking target lib/librte_lpm.so.24.2 00:03:00.401 [335/720] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 
00:03:00.401 [336/720] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:00.401 [337/720] Linking target lib/librte_pcapng.so.24.2 00:03:00.401 [338/720] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:03:00.401 [339/720] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:00.401 [340/720] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:00.401 [341/720] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:00.401 [342/720] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:03:00.401 [343/720] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.660 [344/720] Linking target lib/librte_eventdev.so.24.2 00:03:00.660 [345/720] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:00.660 [346/720] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:03:00.660 [347/720] Linking target lib/librte_dispatcher.so.24.2 00:03:00.660 [348/720] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:00.660 [349/720] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:00.660 [350/720] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:00.919 [351/720] Linking static target lib/librte_power.a 00:03:00.919 [352/720] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:00.919 [353/720] Linking static target lib/librte_regexdev.a 00:03:00.919 [354/720] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:00.919 [355/720] Linking static target lib/librte_rawdev.a 00:03:00.919 [356/720] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:00.919 [357/720] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:01.179 [358/720] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 
00:03:01.179 [359/720] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:01.179 [360/720] Linking static target lib/librte_member.a 00:03:01.179 [361/720] Linking static target lib/librte_mldev.a 00:03:01.179 [362/720] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:01.438 [363/720] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:01.438 [364/720] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:01.438 [365/720] Linking static target lib/librte_reorder.a 00:03:01.438 [366/720] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:01.438 [367/720] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.438 [368/720] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.438 [369/720] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.438 [370/720] Linking target lib/librte_member.so.24.2 00:03:01.438 [371/720] Linking target lib/librte_rawdev.so.24.2 00:03:01.438 [372/720] Linking target lib/librte_power.so.24.2 00:03:01.438 [373/720] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:01.438 [374/720] Linking static target lib/librte_rib.a 00:03:01.697 [375/720] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.697 [376/720] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:01.697 [377/720] Linking target lib/librte_regexdev.so.24.2 00:03:01.697 [378/720] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.697 [379/720] Linking target lib/librte_reorder.so.24.2 00:03:01.697 [380/720] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:01.697 [381/720] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:01.697 [382/720] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:01.697 
[383/720] Linking static target lib/librte_stack.a 00:03:01.956 [384/720] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:03:01.956 [385/720] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.956 [386/720] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:01.956 [387/720] Linking static target lib/librte_security.a 00:03:01.956 [388/720] Linking target lib/librte_rib.so.24.2 00:03:01.956 [389/720] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.956 [390/720] Linking target lib/librte_stack.so.24.2 00:03:02.215 [391/720] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:02.215 [392/720] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:03:02.215 [393/720] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:02.215 [394/720] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:02.474 [395/720] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.474 [396/720] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:02.474 [397/720] Linking static target lib/librte_sched.a 00:03:02.474 [398/720] Linking target lib/librte_security.so.24.2 00:03:02.474 [399/720] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:02.474 [400/720] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.474 [401/720] Generating symbol file lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:03:02.474 [402/720] Linking target lib/librte_mldev.so.24.2 00:03:02.733 [403/720] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:02.733 [404/720] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.733 [405/720] Linking target lib/librte_sched.so.24.2 00:03:02.733 [406/720] Compiling C object 
lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:02.992 [407/720] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:03:02.992 [408/720] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:03.250 [409/720] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:03.250 [410/720] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:03.250 [411/720] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:03.250 [412/720] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:03.508 [413/720] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:03.508 [414/720] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:03.508 [415/720] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:03.767 [416/720] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:03.767 [417/720] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:04.026 [418/720] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:04.026 [419/720] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:04.026 [420/720] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:04.026 [421/720] Linking static target lib/librte_ipsec.a 00:03:04.026 [422/720] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:04.026 [423/720] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:03:04.026 [424/720] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:04.284 [425/720] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.284 [426/720] Linking target lib/librte_ipsec.so.24.2 00:03:04.284 [427/720] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:04.284 [428/720] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:04.284 [429/720] Generating symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:03:04.543 [430/720] Compiling C object 
lib/librte_fib.a.p/fib_trie.c.o 00:03:04.543 [431/720] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:04.543 [432/720] Linking static target lib/librte_fib.a 00:03:04.802 [433/720] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:04.802 [434/720] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:04.802 [435/720] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.802 [436/720] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:05.061 [437/720] Linking static target lib/librte_pdcp.a 00:03:05.061 [438/720] Linking target lib/librte_fib.so.24.2 00:03:05.061 [439/720] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:05.061 [440/720] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:05.061 [441/720] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:05.319 [442/720] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.319 [443/720] Linking target lib/librte_pdcp.so.24.2 00:03:05.578 [444/720] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:05.578 [445/720] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:05.578 [446/720] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:05.578 [447/720] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:05.578 [448/720] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:05.578 [449/720] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:05.837 [450/720] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:05.837 [451/720] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:06.096 [452/720] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:06.096 [453/720] Linking static target lib/librte_port.a 00:03:06.096 [454/720] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:06.096 [455/720] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:06.096 [456/720] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:06.096 [457/720] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:06.355 [458/720] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:06.355 [459/720] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:06.355 [460/720] Linking static target lib/librte_pdump.a 00:03:06.355 [461/720] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:06.614 [462/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:06.614 [463/720] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.614 [464/720] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.614 [465/720] Linking target lib/librte_port.so.24.2 00:03:06.614 [466/720] Linking target lib/librte_pdump.so.24.2 00:03:06.874 [467/720] Generating symbol file lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:03:06.874 [468/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:06.874 [469/720] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:03:06.874 [470/720] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:06.874 [471/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:07.133 [472/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:07.133 [473/720] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:07.133 [474/720] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:07.393 [475/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:07.393 [476/720] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:07.393 [477/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:07.393 [478/720] Linking static target lib/librte_table.a 00:03:07.653 [479/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:07.653 [480/720] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:07.913 [481/720] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:08.172 [482/720] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:08.172 [483/720] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.172 [484/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:08.172 [485/720] Linking target lib/librte_table.so.24.2 00:03:08.172 [486/720] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:08.172 [487/720] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:08.432 [488/720] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:03:08.691 [489/720] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:08.691 [490/720] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:08.691 [491/720] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:08.691 [492/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:08.691 [493/720] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:08.951 [494/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:08.951 [495/720] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:08.951 [496/720] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:08.951 [497/720] Linking static target lib/librte_graph.a 00:03:09.210 [498/720] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:09.210 [499/720] Compiling C object 
lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:09.210 [500/720] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:09.470 [501/720] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:09.470 [502/720] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:09.729 [503/720] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.729 [504/720] Linking target lib/librte_graph.so.24.2 00:03:09.729 [505/720] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:03:09.729 [506/720] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:09.729 [507/720] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:09.989 [508/720] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:09.989 [509/720] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:09.989 [510/720] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:09.989 [511/720] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:09.989 [512/720] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:10.249 [513/720] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:10.249 [514/720] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:10.509 [515/720] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:10.509 [516/720] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:10.509 [517/720] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:10.509 [518/720] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:10.509 [519/720] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:10.509 [520/720] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:10.509 [521/720] Linking static target lib/librte_node.a 00:03:10.509 [522/720] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:10.768 [523/720] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.768 [524/720] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:10.768 [525/720] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:10.768 [526/720] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:10.768 [527/720] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:11.028 [528/720] Linking target lib/librte_node.so.24.2 00:03:11.028 [529/720] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:11.028 [530/720] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:11.028 [531/720] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:11.028 [532/720] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:11.028 [533/720] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:11.028 [534/720] Linking static target drivers/librte_bus_pci.a 00:03:11.028 [535/720] Linking static target drivers/librte_bus_vdev.a 00:03:11.288 [536/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:11.288 [537/720] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:11.288 [538/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:11.288 [539/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:11.288 [540/720] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.288 [541/720] Linking target drivers/librte_bus_vdev.so.24.2 00:03:11.546 [542/720] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:11.546 [543/720] Linking static target drivers/libtmp_rte_mempool_ring.a 
00:03:11.546 [544/720] Generating symbol file drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:03:11.546 [545/720] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.546 [546/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:11.547 [547/720] Linking target drivers/librte_bus_pci.so.24.2 00:03:11.547 [548/720] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:11.547 [549/720] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:11.547 [550/720] Linking static target drivers/librte_mempool_ring.a 00:03:11.547 [551/720] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:11.808 [552/720] Linking target drivers/librte_mempool_ring.so.24.2 00:03:11.808 [553/720] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:03:11.808 [554/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:12.075 [555/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:12.334 [556/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:12.334 [557/720] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:12.594 [558/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:13.163 [559/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:13.163 [560/720] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:13.163 [561/720] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:13.163 [562/720] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:13.163 [563/720] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:13.163 [564/720] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:13.422 [565/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:13.422 [566/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:13.681 [567/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:13.681 [568/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:13.681 [569/720] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:03:13.941 [570/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:14.201 [571/720] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:14.201 [572/720] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:14.201 [573/720] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:14.460 [574/720] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:14.719 [575/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:14.719 [576/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:14.719 [577/720] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:14.719 [578/720] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:14.719 [579/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:14.719 [580/720] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:14.978 [581/720] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:14.979 [582/720] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:14.979 [583/720] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:03:15.238 [584/720] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:15.238 [585/720] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:15.238 [586/720] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:15.238 [587/720] 
Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:15.238 [588/720] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:15.499 [589/720] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:15.499 [590/720] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:15.499 [591/720] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:15.759 [592/720] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:15.759 [593/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:15.759 [594/720] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:16.017 [595/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:16.017 [596/720] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:16.017 [597/720] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:16.017 [598/720] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:16.017 [599/720] Linking static target drivers/librte_net_i40e.a 00:03:16.017 [600/720] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:16.276 [601/720] Compiling C object drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:16.276 [602/720] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:16.535 [603/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:16.535 [604/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:16.535 [605/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:16.535 [606/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:16.794 [607/720] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture 
output) 00:03:16.794 [608/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:16.794 [609/720] Linking target drivers/librte_net_i40e.so.24.2 00:03:17.052 [610/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:17.052 [611/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:17.052 [612/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:17.310 [613/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:17.310 [614/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:17.310 [615/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:17.570 [616/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:17.570 [617/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:17.570 [618/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:17.570 [619/720] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:17.570 [620/720] Linking static target lib/librte_vhost.a 00:03:17.570 [621/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:17.829 [622/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:17.829 [623/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:17.829 [624/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:17.829 [625/720] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:18.088 [626/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:18.347 [627/720] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:18.347 [628/720] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:18.347 [629/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:18.347 [630/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:18.914 [631/720] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.914 [632/720] Linking target lib/librte_vhost.so.24.2 00:03:19.172 [633/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:19.172 [634/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:19.172 [635/720] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:19.172 [636/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:19.172 [637/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:19.431 [638/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:19.431 [639/720] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:19.431 [640/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:19.689 [641/720] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:19.689 [642/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:19.689 [643/720] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:19.689 [644/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:19.689 [645/720] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:19.689 [646/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:19.947 [647/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:19.947 [648/720] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:19.947 [649/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:19.947 [650/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:20.205 [651/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:20.205 [652/720] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:20.205 [653/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:20.205 [654/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:20.463 [655/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:20.463 [656/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:20.721 [657/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:20.721 [658/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:20.721 [659/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:20.721 [660/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:20.980 [661/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:20.980 [662/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:20.980 [663/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:20.980 [664/720] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:21.239 [665/720] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:21.239 [666/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:21.239 [667/720] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:21.239 [668/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:21.239 [669/720] Compiling C object 
app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:21.497 [670/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:21.497 [671/720] Linking static target lib/librte_pipeline.a 00:03:21.497 [672/720] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:21.497 [673/720] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:21.756 [674/720] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:22.014 [675/720] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:22.014 [676/720] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:22.014 [677/720] Linking target app/dpdk-dumpcap 00:03:22.014 [678/720] Linking target app/dpdk-graph 00:03:22.273 [679/720] Linking target app/dpdk-pdump 00:03:22.532 [680/720] Linking target app/dpdk-proc-info 00:03:22.532 [681/720] Linking target app/dpdk-test-acl 00:03:22.532 [682/720] Linking target app/dpdk-test-bbdev 00:03:22.532 [683/720] Linking target app/dpdk-test-cmdline 00:03:22.532 [684/720] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:22.791 [685/720] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:22.791 [686/720] Linking target app/dpdk-test-compress-perf 00:03:23.049 [687/720] Linking target app/dpdk-test-crypto-perf 00:03:23.049 [688/720] Linking target app/dpdk-test-dma-perf 00:03:23.049 [689/720] Linking target app/dpdk-test-fib 00:03:23.049 [690/720] Linking target app/dpdk-test-eventdev 00:03:23.049 [691/720] Linking target app/dpdk-test-flow-perf 00:03:23.050 [692/720] Linking target app/dpdk-test-gpudev 00:03:23.308 [693/720] Linking target app/dpdk-test-mldev 00:03:23.566 [694/720] Linking target app/dpdk-test-pipeline 00:03:23.566 [695/720] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:23.566 [696/720] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:23.566 [697/720] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:23.566 
[698/720] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:23.566 [699/720] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:23.825 [700/720] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:23.825 [701/720] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:23.825 [702/720] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:24.083 [703/720] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:24.083 [704/720] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:24.342 [705/720] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:24.342 [706/720] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.342 [707/720] Linking target lib/librte_pipeline.so.24.2 00:03:24.601 [708/720] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:24.601 [709/720] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:24.601 [710/720] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:24.860 [711/720] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:24.860 [712/720] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:03:24.860 [713/720] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:24.860 [714/720] Linking target app/dpdk-test-sad 00:03:25.119 [715/720] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:25.119 [716/720] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:25.119 [717/720] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:25.119 [718/720] Linking target app/dpdk-test-regex 00:03:25.685 [719/720] Linking target app/dpdk-test-security-perf 00:03:25.686 [720/720] Linking target app/dpdk-testpmd 00:03:25.686 22:04:57 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 
00:03:25.686 22:04:57 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:25.686 22:04:57 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:25.686 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:25.686 [0/1] Installing files. 00:03:25.947 Installing subdir /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/counters.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/cpu.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/memory.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:25.947 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 
00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:25.947 Installing 
/home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.947 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:25.948 Installing 
/home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.948 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.948 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.948 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.948 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.949 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:25.949 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.949 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.949 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.950 
Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:25.950 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:25.950 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:25.950 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.950 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:25.951 
Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.951 Installing 
/home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.951 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.952 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.952 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.952 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.952 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:25.952 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:25.952 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:25.952 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:26.211 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:26.211 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:26.211 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:26.211 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:26.211 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:26.211 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 
00:03:26.211 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_log.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_kvargs.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_argparse.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_telemetry.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_eal.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_rcu.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_mempool.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_mbuf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_net.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_meter.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_ethdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_cmdline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_metrics.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_hash.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_timer.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_acl.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_bbdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_bitratestats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_bpf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_cfgfile.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_compressdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_cryptodev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_distributor.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_dmadev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_efd.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_eventdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_dispatcher.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_gpudev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_gro.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_gso.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_ip_frag.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_jobstats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_latencystats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_lpm.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_member.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_pcapng.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_power.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_rawdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_regexdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_mldev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_rib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_reorder.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_sched.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_security.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_stack.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.211 Installing lib/librte_vhost.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.471 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.471 Installing lib/librte_ipsec.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.471 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.472 Installing lib/librte_pdcp.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.472 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.472 Installing lib/librte_fib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.472 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.472 Installing lib/librte_port.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.472 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.472 Installing lib/librte_pdump.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.472 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.472 Installing lib/librte_table.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.472 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.472 Installing lib/librte_pipeline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.472 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.472 Installing lib/librte_graph.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.472 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.472 Installing lib/librte_node.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.472 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.472 Installing drivers/librte_bus_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2
00:03:26.472 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.472 Installing drivers/librte_bus_vdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2
00:03:26.472 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.472 Installing drivers/librte_mempool_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2
00:03:26.472 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.472 Installing drivers/librte_net_i40e.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2
00:03:26.472 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:26.472 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:26.472 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:26.472 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:26.472 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:26.472 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:26.472 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:26.472 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:26.472 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:26.472 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:26.472 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:26.472 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:26.472 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:26.472 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:26.472 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:26.472 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:26.472 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:26.472 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:26.472 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:26.472 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.472 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ptr_compress/rte_ptr_compress.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.473 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:26.474 Installing
/home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.474 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.474 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.734 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.734 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.734 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.734 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.734 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.734 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.734 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.734 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.734 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.734 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.734 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.734 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.734 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.734 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.734 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 
Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry-exporter.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.735 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:26.735 Installing 
/home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:26.735 Installing symlink pointing to librte_log.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:26.735 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:26.735 Installing symlink pointing to librte_kvargs.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:26.735 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:26.735 Installing symlink pointing to librte_argparse.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.24 00:03:26.735 Installing symlink pointing to librte_argparse.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so 00:03:26.735 Installing symlink pointing to librte_telemetry.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:26.735 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:26.735 Installing symlink pointing to librte_eal.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:26.735 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:26.735 Installing symlink pointing to librte_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:26.735 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:26.735 Installing symlink pointing to librte_rcu.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:26.735 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:26.735 Installing symlink pointing to librte_mempool.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:26.735 
Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:26.735 Installing symlink pointing to librte_mbuf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:26.735 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:26.735 Installing symlink pointing to librte_net.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:26.735 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:26.735 Installing symlink pointing to librte_meter.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:26.735 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:26.735 Installing symlink pointing to librte_ethdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:26.735 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:26.735 Installing symlink pointing to librte_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:26.735 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:26.735 Installing symlink pointing to librte_cmdline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:26.735 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:26.735 Installing symlink pointing to librte_metrics.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:26.735 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:26.735 Installing symlink pointing to librte_hash.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:26.735 Installing symlink 
pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:26.735 Installing symlink pointing to librte_timer.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:26.735 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:26.735 Installing symlink pointing to librte_acl.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:26.735 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:26.735 Installing symlink pointing to librte_bbdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:26.735 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:26.735 Installing symlink pointing to librte_bitratestats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:26.735 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:26.735 Installing symlink pointing to librte_bpf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:26.735 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:26.735 Installing symlink pointing to librte_cfgfile.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:26.735 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:26.735 Installing symlink pointing to librte_compressdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:26.735 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:26.735 Installing symlink pointing to librte_cryptodev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 
00:03:26.735 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:26.735 Installing symlink pointing to librte_distributor.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:26.735 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:26.736 Installing symlink pointing to librte_dmadev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:26.736 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:26.736 Installing symlink pointing to librte_efd.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:26.736 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:26.736 Installing symlink pointing to librte_eventdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:26.736 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:26.736 Installing symlink pointing to librte_dispatcher.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:26.736 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:26.736 Installing symlink pointing to librte_gpudev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:26.736 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:26.736 Installing symlink pointing to librte_gro.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:26.736 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:26.736 Installing symlink pointing to librte_gso.so.24.2 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:26.736 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:26.736 Installing symlink pointing to librte_ip_frag.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:26.736 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:26.736 Installing symlink pointing to librte_jobstats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:26.736 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:26.736 Installing symlink pointing to librte_latencystats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:26.736 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:26.736 Installing symlink pointing to librte_lpm.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:26.736 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:26.736 Installing symlink pointing to librte_member.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:26.736 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:26.736 Installing symlink pointing to librte_pcapng.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:26.736 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:03:26.736 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:03:26.736 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:03:26.736 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:03:26.736 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:03:26.736 
'./librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:03:26.736 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:03:26.736 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 00:03:26.736 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:03:26.736 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:03:26.736 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:03:26.736 './librte_net_i40e.so.24.2' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:03:26.736 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:26.736 Installing symlink pointing to librte_power.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:26.736 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:26.736 Installing symlink pointing to librte_rawdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:26.736 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:26.736 Installing symlink pointing to librte_regexdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:26.736 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:26.736 Installing symlink pointing to librte_mldev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:26.736 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:26.736 Installing symlink pointing to librte_rib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:26.736 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:26.736 Installing symlink pointing to 
librte_reorder.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:26.736 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:26.736 Installing symlink pointing to librte_sched.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:26.736 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:26.736 Installing symlink pointing to librte_security.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:26.736 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:26.736 Installing symlink pointing to librte_stack.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:26.736 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:26.736 Installing symlink pointing to librte_vhost.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:26.736 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:26.736 Installing symlink pointing to librte_ipsec.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:26.736 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:26.736 Installing symlink pointing to librte_pdcp.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:26.736 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:26.736 Installing symlink pointing to librte_fib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:26.736 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:26.736 Installing symlink pointing to 
librte_port.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:26.736 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:26.736 Installing symlink pointing to librte_pdump.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:26.736 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:26.736 Installing symlink pointing to librte_table.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:26.736 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:26.736 Installing symlink pointing to librte_pipeline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:26.736 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:26.736 Installing symlink pointing to librte_graph.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:26.736 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:26.736 Installing symlink pointing to librte_node.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:26.736 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:26.736 Installing symlink pointing to librte_bus_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:03:26.736 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:03:26.736 Installing symlink pointing to librte_bus_vdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:03:26.736 Installing symlink pointing to librte_bus_vdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:03:26.736 Installing symlink pointing to librte_mempool_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:03:26.736 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:03:26.736 Installing symlink pointing to librte_net_i40e.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:03:26.736 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:03:26.736 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2' 00:03:26.736 22:04:58 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:03:26.736 22:04:58 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:26.736 00:03:26.736 real 0m48.149s 00:03:26.736 user 5m18.273s 00:03:26.736 sys 1m9.189s 00:03:26.736 22:04:58 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:26.736 ************************************ 00:03:26.736 END TEST build_native_dpdk 00:03:26.736 ************************************ 00:03:26.736 22:04:58 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:26.736 22:04:58 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:26.736 22:04:58 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:26.736 22:04:58 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:26.736 22:04:58 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:26.736 22:04:58 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:26.736 22:04:58 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:26.736 22:04:58 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:26.736 22:04:58 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug 
--enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:29.268 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:29.268 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.268 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:29.268 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:29.527 Using 'verbs' RDMA provider 00:03:45.841 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:00.731 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:00.731 Creating mk/config.mk...done. 00:04:00.731 Creating mk/cc.flags.mk...done. 00:04:00.731 Type 'make' to build. 00:04:00.731 22:05:32 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:04:00.731 22:05:32 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:04:00.731 22:05:32 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:00.731 22:05:32 -- common/autotest_common.sh@10 -- $ set +x 00:04:00.731 ************************************ 00:04:00.731 START TEST make 00:04:00.731 ************************************ 00:04:00.731 22:05:32 make -- common/autotest_common.sh@1123 -- $ make -j10 00:04:00.731 make[1]: Nothing to be done for 'all'. 
00:04:27.269 CC lib/ut_mock/mock.o 00:04:27.269 CC lib/log/log.o 00:04:27.269 CC lib/ut/ut.o 00:04:27.269 CC lib/log/log_flags.o 00:04:27.269 CC lib/log/log_deprecated.o 00:04:27.269 LIB libspdk_ut_mock.a 00:04:27.269 LIB libspdk_ut.a 00:04:27.269 SO libspdk_ut_mock.so.6.0 00:04:27.269 LIB libspdk_log.a 00:04:27.269 SO libspdk_ut.so.2.0 00:04:27.269 SYMLINK libspdk_ut_mock.so 00:04:27.269 SO libspdk_log.so.7.0 00:04:27.269 SYMLINK libspdk_ut.so 00:04:27.269 SYMLINK libspdk_log.so 00:04:27.269 CC lib/dma/dma.o 00:04:27.269 CC lib/ioat/ioat.o 00:04:27.269 CC lib/util/base64.o 00:04:27.269 CC lib/util/bit_array.o 00:04:27.269 CXX lib/trace_parser/trace.o 00:04:27.269 CC lib/util/cpuset.o 00:04:27.269 CC lib/util/crc16.o 00:04:27.269 CC lib/util/crc32.o 00:04:27.269 CC lib/util/crc32c.o 00:04:27.269 CC lib/vfio_user/host/vfio_user_pci.o 00:04:27.269 CC lib/util/crc32_ieee.o 00:04:27.269 CC lib/util/crc64.o 00:04:27.269 CC lib/util/dif.o 00:04:27.269 LIB libspdk_dma.a 00:04:27.269 SO libspdk_dma.so.4.0 00:04:27.269 CC lib/vfio_user/host/vfio_user.o 00:04:27.269 CC lib/util/fd.o 00:04:27.269 CC lib/util/fd_group.o 00:04:27.269 LIB libspdk_ioat.a 00:04:27.269 CC lib/util/file.o 00:04:27.269 SO libspdk_ioat.so.7.0 00:04:27.269 CC lib/util/hexlify.o 00:04:27.269 SYMLINK libspdk_dma.so 00:04:27.269 CC lib/util/iov.o 00:04:27.269 CC lib/util/math.o 00:04:27.269 CC lib/util/net.o 00:04:27.269 SYMLINK libspdk_ioat.so 00:04:27.269 CC lib/util/pipe.o 00:04:27.269 LIB libspdk_vfio_user.a 00:04:27.269 CC lib/util/strerror_tls.o 00:04:27.269 CC lib/util/string.o 00:04:27.269 CC lib/util/uuid.o 00:04:27.269 SO libspdk_vfio_user.so.5.0 00:04:27.269 CC lib/util/xor.o 00:04:27.269 CC lib/util/zipf.o 00:04:27.269 SYMLINK libspdk_vfio_user.so 00:04:27.269 LIB libspdk_util.a 00:04:27.269 SO libspdk_util.so.10.0 00:04:27.269 LIB libspdk_trace_parser.a 00:04:27.269 SYMLINK libspdk_util.so 00:04:27.269 SO libspdk_trace_parser.so.5.0 00:04:27.269 SYMLINK libspdk_trace_parser.so 00:04:27.269 CC 
lib/json/json_parse.o 00:04:27.269 CC lib/rdma_utils/rdma_utils.o 00:04:27.269 CC lib/json/json_util.o 00:04:27.269 CC lib/json/json_write.o 00:04:27.269 CC lib/env_dpdk/env.o 00:04:27.269 CC lib/env_dpdk/memory.o 00:04:27.269 CC lib/conf/conf.o 00:04:27.269 CC lib/idxd/idxd.o 00:04:27.269 CC lib/rdma_provider/common.o 00:04:27.269 CC lib/vmd/vmd.o 00:04:27.269 LIB libspdk_conf.a 00:04:27.269 CC lib/idxd/idxd_user.o 00:04:27.269 CC lib/env_dpdk/pci.o 00:04:27.269 SO libspdk_conf.so.6.0 00:04:27.269 LIB libspdk_rdma_utils.a 00:04:27.269 LIB libspdk_json.a 00:04:27.269 SO libspdk_rdma_utils.so.1.0 00:04:27.269 SYMLINK libspdk_conf.so 00:04:27.269 SO libspdk_json.so.6.0 00:04:27.269 CC lib/idxd/idxd_kernel.o 00:04:27.269 SYMLINK libspdk_rdma_utils.so 00:04:27.269 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:27.269 SYMLINK libspdk_json.so 00:04:27.269 CC lib/env_dpdk/init.o 00:04:27.269 CC lib/env_dpdk/threads.o 00:04:27.269 CC lib/env_dpdk/pci_ioat.o 00:04:27.269 LIB libspdk_idxd.a 00:04:27.269 CC lib/env_dpdk/pci_virtio.o 00:04:27.269 SO libspdk_idxd.so.12.0 00:04:27.269 LIB libspdk_rdma_provider.a 00:04:27.269 CC lib/env_dpdk/pci_vmd.o 00:04:27.269 CC lib/env_dpdk/pci_idxd.o 00:04:27.269 CC lib/jsonrpc/jsonrpc_server.o 00:04:27.269 SO libspdk_rdma_provider.so.6.0 00:04:27.269 SYMLINK libspdk_idxd.so 00:04:27.269 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:27.269 CC lib/env_dpdk/pci_event.o 00:04:27.269 CC lib/jsonrpc/jsonrpc_client.o 00:04:27.269 CC lib/vmd/led.o 00:04:27.269 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:27.269 CC lib/env_dpdk/sigbus_handler.o 00:04:27.269 SYMLINK libspdk_rdma_provider.so 00:04:27.269 CC lib/env_dpdk/pci_dpdk.o 00:04:27.269 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:27.269 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:27.269 LIB libspdk_jsonrpc.a 00:04:27.269 SO libspdk_jsonrpc.so.6.0 00:04:27.269 LIB libspdk_vmd.a 00:04:27.269 SO libspdk_vmd.so.6.0 00:04:27.269 SYMLINK libspdk_jsonrpc.so 00:04:27.269 SYMLINK libspdk_vmd.so 00:04:27.269 CC 
lib/rpc/rpc.o 00:04:27.269 LIB libspdk_rpc.a 00:04:27.269 LIB libspdk_env_dpdk.a 00:04:27.269 SO libspdk_rpc.so.6.0 00:04:27.269 SO libspdk_env_dpdk.so.15.0 00:04:27.269 SYMLINK libspdk_rpc.so 00:04:27.269 SYMLINK libspdk_env_dpdk.so 00:04:27.269 CC lib/notify/notify.o 00:04:27.269 CC lib/notify/notify_rpc.o 00:04:27.269 CC lib/keyring/keyring.o 00:04:27.269 CC lib/keyring/keyring_rpc.o 00:04:27.269 CC lib/trace/trace.o 00:04:27.269 CC lib/trace/trace_flags.o 00:04:27.269 CC lib/trace/trace_rpc.o 00:04:27.527 LIB libspdk_notify.a 00:04:27.527 SO libspdk_notify.so.6.0 00:04:27.527 LIB libspdk_trace.a 00:04:27.527 LIB libspdk_keyring.a 00:04:27.527 SO libspdk_trace.so.10.0 00:04:27.527 SO libspdk_keyring.so.1.0 00:04:27.527 SYMLINK libspdk_notify.so 00:04:27.527 SYMLINK libspdk_keyring.so 00:04:27.784 SYMLINK libspdk_trace.so 00:04:28.041 CC lib/sock/sock.o 00:04:28.041 CC lib/sock/sock_rpc.o 00:04:28.041 CC lib/thread/iobuf.o 00:04:28.041 CC lib/thread/thread.o 00:04:28.605 LIB libspdk_sock.a 00:04:28.605 SO libspdk_sock.so.10.0 00:04:28.605 SYMLINK libspdk_sock.so 00:04:29.170 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:29.170 CC lib/nvme/nvme_ns_cmd.o 00:04:29.170 CC lib/nvme/nvme_ctrlr.o 00:04:29.170 CC lib/nvme/nvme_fabric.o 00:04:29.170 CC lib/nvme/nvme.o 00:04:29.170 CC lib/nvme/nvme_pcie_common.o 00:04:29.170 CC lib/nvme/nvme_pcie.o 00:04:29.170 CC lib/nvme/nvme_ns.o 00:04:29.170 CC lib/nvme/nvme_qpair.o 00:04:29.735 CC lib/nvme/nvme_quirks.o 00:04:29.735 CC lib/nvme/nvme_transport.o 00:04:29.735 CC lib/nvme/nvme_discovery.o 00:04:29.735 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:29.735 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:29.992 LIB libspdk_thread.a 00:04:29.992 CC lib/nvme/nvme_tcp.o 00:04:29.992 SO libspdk_thread.so.10.1 00:04:30.250 SYMLINK libspdk_thread.so 00:04:30.250 CC lib/nvme/nvme_opal.o 00:04:30.250 CC lib/nvme/nvme_io_msg.o 00:04:30.250 CC lib/nvme/nvme_poll_group.o 00:04:30.250 CC lib/nvme/nvme_zns.o 00:04:30.507 CC lib/nvme/nvme_stubs.o 00:04:30.507 CC 
lib/blob/blobstore.o 00:04:30.507 CC lib/accel/accel.o 00:04:30.765 CC lib/init/json_config.o 00:04:30.765 CC lib/blob/request.o 00:04:30.765 CC lib/accel/accel_rpc.o 00:04:30.765 CC lib/nvme/nvme_auth.o 00:04:30.765 CC lib/nvme/nvme_cuse.o 00:04:31.022 CC lib/virtio/virtio.o 00:04:31.022 CC lib/nvme/nvme_rdma.o 00:04:31.022 CC lib/blob/zeroes.o 00:04:31.022 CC lib/blob/blob_bs_dev.o 00:04:31.285 CC lib/init/subsystem.o 00:04:31.285 CC lib/accel/accel_sw.o 00:04:31.285 CC lib/virtio/virtio_vhost_user.o 00:04:31.285 CC lib/virtio/virtio_vfio_user.o 00:04:31.285 CC lib/virtio/virtio_pci.o 00:04:31.285 CC lib/init/subsystem_rpc.o 00:04:31.566 CC lib/init/rpc.o 00:04:31.566 LIB libspdk_accel.a 00:04:31.566 SO libspdk_accel.so.16.0 00:04:31.566 SYMLINK libspdk_accel.so 00:04:31.566 LIB libspdk_virtio.a 00:04:31.566 SO libspdk_virtio.so.7.0 00:04:31.823 LIB libspdk_init.a 00:04:31.823 SO libspdk_init.so.5.0 00:04:31.823 SYMLINK libspdk_virtio.so 00:04:31.823 CC lib/bdev/bdev.o 00:04:31.823 CC lib/bdev/bdev_rpc.o 00:04:31.823 CC lib/bdev/part.o 00:04:31.823 CC lib/bdev/bdev_zone.o 00:04:31.823 CC lib/bdev/scsi_nvme.o 00:04:31.823 SYMLINK libspdk_init.so 00:04:32.080 LIB libspdk_nvme.a 00:04:32.080 CC lib/event/app.o 00:04:32.080 CC lib/event/log_rpc.o 00:04:32.080 CC lib/event/reactor.o 00:04:32.080 CC lib/event/app_rpc.o 00:04:32.080 CC lib/event/scheduler_static.o 00:04:32.338 SO libspdk_nvme.so.13.1 00:04:32.595 LIB libspdk_event.a 00:04:32.595 SO libspdk_event.so.14.0 00:04:32.595 SYMLINK libspdk_nvme.so 00:04:32.595 SYMLINK libspdk_event.so 00:04:33.161 LIB libspdk_blob.a 00:04:33.161 SO libspdk_blob.so.11.0 00:04:33.161 SYMLINK libspdk_blob.so 00:04:33.418 CC lib/lvol/lvol.o 00:04:33.418 CC lib/blobfs/tree.o 00:04:33.418 CC lib/blobfs/blobfs.o 00:04:34.351 LIB libspdk_blobfs.a 00:04:34.351 SO libspdk_blobfs.so.10.0 00:04:34.351 SYMLINK libspdk_blobfs.so 00:04:34.916 LIB libspdk_lvol.a 00:04:35.174 SO libspdk_lvol.so.10.0 00:04:35.174 SYMLINK libspdk_lvol.so 
00:04:35.174 LIB libspdk_bdev.a 00:04:35.174 SO libspdk_bdev.so.16.0 00:04:35.432 SYMLINK libspdk_bdev.so 00:04:35.690 CC lib/nvmf/ctrlr.o 00:04:35.690 CC lib/nbd/nbd.o 00:04:35.690 CC lib/nvmf/ctrlr_discovery.o 00:04:35.690 CC lib/nbd/nbd_rpc.o 00:04:35.690 CC lib/ftl/ftl_core.o 00:04:35.690 CC lib/nvmf/ctrlr_bdev.o 00:04:35.690 CC lib/nvmf/subsystem.o 00:04:35.690 CC lib/ublk/ublk.o 00:04:35.690 CC lib/nvmf/nvmf.o 00:04:35.690 CC lib/scsi/dev.o 00:04:35.947 CC lib/scsi/lun.o 00:04:35.947 LIB libspdk_nbd.a 00:04:35.947 CC lib/scsi/port.o 00:04:35.947 SO libspdk_nbd.so.7.0 00:04:36.204 CC lib/ftl/ftl_init.o 00:04:36.204 SYMLINK libspdk_nbd.so 00:04:36.204 CC lib/scsi/scsi.o 00:04:36.204 CC lib/scsi/scsi_bdev.o 00:04:36.204 CC lib/scsi/scsi_pr.o 00:04:36.204 CC lib/ftl/ftl_layout.o 00:04:36.204 CC lib/ublk/ublk_rpc.o 00:04:36.204 CC lib/ftl/ftl_debug.o 00:04:36.462 CC lib/ftl/ftl_io.o 00:04:36.462 CC lib/ftl/ftl_sb.o 00:04:36.462 LIB libspdk_ublk.a 00:04:36.462 CC lib/ftl/ftl_l2p.o 00:04:36.462 SO libspdk_ublk.so.3.0 00:04:36.462 CC lib/ftl/ftl_l2p_flat.o 00:04:36.462 CC lib/ftl/ftl_nv_cache.o 00:04:36.462 SYMLINK libspdk_ublk.so 00:04:36.462 CC lib/ftl/ftl_band.o 00:04:36.462 CC lib/scsi/scsi_rpc.o 00:04:36.462 CC lib/ftl/ftl_band_ops.o 00:04:36.719 CC lib/ftl/ftl_writer.o 00:04:36.719 CC lib/scsi/task.o 00:04:36.719 CC lib/ftl/ftl_rq.o 00:04:36.719 CC lib/nvmf/nvmf_rpc.o 00:04:36.976 CC lib/ftl/ftl_reloc.o 00:04:36.976 CC lib/ftl/ftl_l2p_cache.o 00:04:36.976 CC lib/ftl/ftl_p2l.o 00:04:36.976 LIB libspdk_scsi.a 00:04:36.976 CC lib/nvmf/transport.o 00:04:36.976 CC lib/ftl/mngt/ftl_mngt.o 00:04:36.976 SO libspdk_scsi.so.9.0 00:04:36.976 SYMLINK libspdk_scsi.so 00:04:36.976 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:37.233 CC lib/nvmf/tcp.o 00:04:37.233 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:37.233 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:37.490 CC lib/iscsi/conn.o 00:04:37.490 CC lib/nvmf/stubs.o 00:04:37.490 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:37.490 CC 
lib/ftl/mngt/ftl_mngt_misc.o 00:04:37.490 CC lib/iscsi/init_grp.o 00:04:37.490 CC lib/iscsi/iscsi.o 00:04:37.490 CC lib/vhost/vhost.o 00:04:37.490 CC lib/iscsi/md5.o 00:04:37.747 CC lib/nvmf/mdns_server.o 00:04:37.747 CC lib/nvmf/rdma.o 00:04:37.747 CC lib/nvmf/auth.o 00:04:37.747 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:37.747 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:38.004 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:38.004 CC lib/iscsi/param.o 00:04:38.004 CC lib/iscsi/portal_grp.o 00:04:38.004 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:38.004 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:38.261 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:38.261 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:38.261 CC lib/iscsi/tgt_node.o 00:04:38.261 CC lib/iscsi/iscsi_subsystem.o 00:04:38.518 CC lib/iscsi/iscsi_rpc.o 00:04:38.518 CC lib/iscsi/task.o 00:04:38.518 CC lib/vhost/vhost_rpc.o 00:04:38.518 CC lib/ftl/utils/ftl_conf.o 00:04:38.518 CC lib/ftl/utils/ftl_md.o 00:04:38.518 CC lib/ftl/utils/ftl_mempool.o 00:04:38.775 CC lib/ftl/utils/ftl_bitmap.o 00:04:38.775 CC lib/ftl/utils/ftl_property.o 00:04:38.775 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:38.775 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:38.775 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:38.775 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:38.775 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:38.775 LIB libspdk_iscsi.a 00:04:38.775 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:39.033 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:39.033 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:39.033 SO libspdk_iscsi.so.8.0 00:04:39.033 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:39.033 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:39.033 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:39.033 CC lib/vhost/vhost_scsi.o 00:04:39.033 CC lib/vhost/vhost_blk.o 00:04:39.033 SYMLINK libspdk_iscsi.so 00:04:39.033 CC lib/ftl/base/ftl_base_dev.o 00:04:39.033 CC lib/ftl/base/ftl_base_bdev.o 00:04:39.033 CC lib/ftl/ftl_trace.o 00:04:39.289 CC lib/vhost/rte_vhost_user.o 00:04:39.545 LIB libspdk_ftl.a 
00:04:39.545 LIB libspdk_nvmf.a 00:04:39.545 SO libspdk_nvmf.so.19.0 00:04:39.802 SO libspdk_ftl.so.9.0 00:04:39.802 SYMLINK libspdk_nvmf.so 00:04:40.059 SYMLINK libspdk_ftl.so 00:04:40.059 LIB libspdk_vhost.a 00:04:40.316 SO libspdk_vhost.so.8.0 00:04:40.316 SYMLINK libspdk_vhost.so 00:04:40.881 CC module/env_dpdk/env_dpdk_rpc.o 00:04:40.881 CC module/accel/iaa/accel_iaa.o 00:04:40.881 CC module/keyring/linux/keyring.o 00:04:40.881 CC module/accel/dsa/accel_dsa.o 00:04:40.881 CC module/accel/ioat/accel_ioat.o 00:04:40.881 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:40.881 CC module/accel/error/accel_error.o 00:04:40.881 CC module/keyring/file/keyring.o 00:04:40.881 CC module/blob/bdev/blob_bdev.o 00:04:40.881 LIB libspdk_env_dpdk_rpc.a 00:04:40.881 CC module/sock/posix/posix.o 00:04:40.881 SO libspdk_env_dpdk_rpc.so.6.0 00:04:40.881 SYMLINK libspdk_env_dpdk_rpc.so 00:04:40.881 CC module/keyring/linux/keyring_rpc.o 00:04:40.881 CC module/accel/dsa/accel_dsa_rpc.o 00:04:40.881 CC module/accel/ioat/accel_ioat_rpc.o 00:04:40.881 CC module/keyring/file/keyring_rpc.o 00:04:40.881 CC module/accel/iaa/accel_iaa_rpc.o 00:04:41.140 LIB libspdk_keyring_linux.a 00:04:41.140 LIB libspdk_accel_dsa.a 00:04:41.140 SO libspdk_keyring_linux.so.1.0 00:04:41.140 LIB libspdk_keyring_file.a 00:04:41.140 LIB libspdk_accel_ioat.a 00:04:41.140 SO libspdk_accel_dsa.so.5.0 00:04:41.140 LIB libspdk_accel_iaa.a 00:04:41.140 SO libspdk_keyring_file.so.1.0 00:04:41.140 SO libspdk_accel_ioat.so.6.0 00:04:41.140 SO libspdk_accel_iaa.so.3.0 00:04:41.140 SYMLINK libspdk_keyring_linux.so 00:04:41.140 SYMLINK libspdk_accel_dsa.so 00:04:41.140 SYMLINK libspdk_accel_ioat.so 00:04:41.140 CC module/accel/error/accel_error_rpc.o 00:04:41.140 SYMLINK libspdk_keyring_file.so 00:04:41.140 LIB libspdk_scheduler_dynamic.a 00:04:41.140 SYMLINK libspdk_accel_iaa.so 00:04:41.140 CC module/sock/uring/uring.o 00:04:41.140 SO libspdk_scheduler_dynamic.so.4.0 00:04:41.397 SYMLINK 
libspdk_scheduler_dynamic.so 00:04:41.397 LIB libspdk_accel_error.a 00:04:41.397 LIB libspdk_blob_bdev.a 00:04:41.397 SO libspdk_accel_error.so.2.0 00:04:41.397 SO libspdk_blob_bdev.so.11.0 00:04:41.397 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:41.397 SYMLINK libspdk_accel_error.so 00:04:41.397 CC module/scheduler/gscheduler/gscheduler.o 00:04:41.397 SYMLINK libspdk_blob_bdev.so 00:04:41.654 LIB libspdk_sock_posix.a 00:04:41.654 LIB libspdk_scheduler_dpdk_governor.a 00:04:41.654 SO libspdk_sock_posix.so.6.0 00:04:41.654 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:41.654 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:41.654 SYMLINK libspdk_sock_posix.so 00:04:41.654 LIB libspdk_scheduler_gscheduler.a 00:04:41.654 CC module/blobfs/bdev/blobfs_bdev.o 00:04:41.654 CC module/bdev/error/vbdev_error.o 00:04:41.654 CC module/bdev/lvol/vbdev_lvol.o 00:04:41.654 CC module/bdev/malloc/bdev_malloc.o 00:04:41.654 CC module/bdev/gpt/gpt.o 00:04:41.654 SO libspdk_scheduler_gscheduler.so.4.0 00:04:41.911 CC module/bdev/delay/vbdev_delay.o 00:04:41.911 SYMLINK libspdk_scheduler_gscheduler.so 00:04:41.911 CC module/bdev/gpt/vbdev_gpt.o 00:04:41.911 CC module/bdev/nvme/bdev_nvme.o 00:04:41.911 CC module/bdev/null/bdev_null.o 00:04:41.911 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:41.911 CC module/bdev/error/vbdev_error_rpc.o 00:04:42.168 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:42.168 LIB libspdk_blobfs_bdev.a 00:04:42.168 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:42.168 CC module/bdev/null/bdev_null_rpc.o 00:04:42.168 SO libspdk_blobfs_bdev.so.6.0 00:04:42.168 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:42.168 SYMLINK libspdk_blobfs_bdev.so 00:04:42.168 CC module/bdev/nvme/nvme_rpc.o 00:04:42.168 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:42.168 LIB libspdk_bdev_malloc.a 00:04:42.168 SO libspdk_bdev_malloc.so.6.0 00:04:42.168 LIB libspdk_bdev_null.a 00:04:42.168 LIB libspdk_bdev_delay.a 00:04:42.168 SO libspdk_bdev_null.so.6.0 00:04:42.454 SO 
libspdk_bdev_delay.so.6.0 00:04:42.454 SYMLINK libspdk_bdev_malloc.so 00:04:42.454 LIB libspdk_bdev_error.a 00:04:42.454 LIB libspdk_bdev_gpt.a 00:04:42.454 SYMLINK libspdk_bdev_null.so 00:04:42.454 CC module/bdev/nvme/bdev_mdns_client.o 00:04:42.454 SO libspdk_bdev_error.so.6.0 00:04:42.454 SO libspdk_bdev_gpt.so.6.0 00:04:42.454 SYMLINK libspdk_bdev_delay.so 00:04:42.454 SYMLINK libspdk_bdev_error.so 00:04:42.454 SYMLINK libspdk_bdev_gpt.so 00:04:42.454 LIB libspdk_bdev_lvol.a 00:04:42.454 CC module/bdev/passthru/vbdev_passthru.o 00:04:42.454 CC module/bdev/raid/bdev_raid.o 00:04:42.713 SO libspdk_bdev_lvol.so.6.0 00:04:42.713 CC module/bdev/raid/bdev_raid_rpc.o 00:04:42.713 LIB libspdk_sock_uring.a 00:04:42.713 CC module/bdev/split/vbdev_split.o 00:04:42.713 SO libspdk_sock_uring.so.5.0 00:04:42.713 SYMLINK libspdk_bdev_lvol.so 00:04:42.713 CC module/bdev/split/vbdev_split_rpc.o 00:04:42.713 CC module/bdev/uring/bdev_uring.o 00:04:42.713 CC module/bdev/uring/bdev_uring_rpc.o 00:04:42.713 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:42.713 SYMLINK libspdk_sock_uring.so 00:04:42.713 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:42.713 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:42.713 CC module/bdev/raid/bdev_raid_sb.o 00:04:42.970 CC module/bdev/nvme/vbdev_opal.o 00:04:42.970 CC module/bdev/raid/raid0.o 00:04:42.970 LIB libspdk_bdev_passthru.a 00:04:42.970 SO libspdk_bdev_passthru.so.6.0 00:04:42.970 LIB libspdk_bdev_split.a 00:04:42.970 LIB libspdk_bdev_zone_block.a 00:04:42.970 SO libspdk_bdev_split.so.6.0 00:04:42.970 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:42.970 SO libspdk_bdev_zone_block.so.6.0 00:04:43.228 SYMLINK libspdk_bdev_split.so 00:04:43.228 SYMLINK libspdk_bdev_passthru.so 00:04:43.228 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:43.228 CC module/bdev/raid/raid1.o 00:04:43.228 SYMLINK libspdk_bdev_zone_block.so 00:04:43.228 CC module/bdev/raid/concat.o 00:04:43.228 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:43.228 CC 
module/bdev/virtio/bdev_virtio_blk.o 00:04:43.228 CC module/bdev/aio/bdev_aio.o 00:04:43.228 CC module/bdev/iscsi/bdev_iscsi.o 00:04:43.485 CC module/bdev/ftl/bdev_ftl.o 00:04:43.485 LIB libspdk_bdev_uring.a 00:04:43.485 CC module/bdev/aio/bdev_aio_rpc.o 00:04:43.485 SO libspdk_bdev_uring.so.6.0 00:04:43.485 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:43.485 SYMLINK libspdk_bdev_uring.so 00:04:43.485 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:43.485 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:43.742 LIB libspdk_bdev_aio.a 00:04:43.742 SO libspdk_bdev_aio.so.6.0 00:04:43.742 LIB libspdk_bdev_raid.a 00:04:43.742 LIB libspdk_bdev_iscsi.a 00:04:43.742 SYMLINK libspdk_bdev_aio.so 00:04:43.742 SO libspdk_bdev_iscsi.so.6.0 00:04:43.743 LIB libspdk_bdev_nvme.a 00:04:43.743 SO libspdk_bdev_raid.so.6.0 00:04:43.743 SYMLINK libspdk_bdev_iscsi.so 00:04:43.743 LIB libspdk_bdev_virtio.a 00:04:43.743 SO libspdk_bdev_nvme.so.7.0 00:04:43.743 SO libspdk_bdev_virtio.so.6.0 00:04:44.000 SYMLINK libspdk_bdev_nvme.so 00:04:44.000 SYMLINK libspdk_bdev_raid.so 00:04:44.000 LIB libspdk_bdev_ftl.a 00:04:44.000 SYMLINK libspdk_bdev_virtio.so 00:04:44.000 SO libspdk_bdev_ftl.so.6.0 00:04:44.000 SYMLINK libspdk_bdev_ftl.so 00:04:44.565 CC module/event/subsystems/vmd/vmd.o 00:04:44.565 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:44.565 CC module/event/subsystems/keyring/keyring.o 00:04:44.565 CC module/event/subsystems/iobuf/iobuf.o 00:04:44.565 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:44.565 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:44.565 CC module/event/subsystems/sock/sock.o 00:04:44.565 CC module/event/subsystems/scheduler/scheduler.o 00:04:44.823 LIB libspdk_event_vhost_blk.a 00:04:44.823 LIB libspdk_event_keyring.a 00:04:44.823 LIB libspdk_event_sock.a 00:04:44.823 LIB libspdk_event_scheduler.a 00:04:44.823 SO libspdk_event_keyring.so.1.0 00:04:44.823 SO libspdk_event_sock.so.5.0 00:04:44.823 LIB libspdk_event_vmd.a 00:04:44.823 SO libspdk_event_vhost_blk.so.3.0 
00:04:44.823 SO libspdk_event_scheduler.so.4.0 00:04:44.823 SO libspdk_event_vmd.so.6.0 00:04:44.823 SYMLINK libspdk_event_keyring.so 00:04:44.823 SYMLINK libspdk_event_sock.so 00:04:44.823 SYMLINK libspdk_event_scheduler.so 00:04:44.823 SYMLINK libspdk_event_vhost_blk.so 00:04:44.823 SYMLINK libspdk_event_vmd.so 00:04:44.823 LIB libspdk_event_iobuf.a 00:04:45.081 SO libspdk_event_iobuf.so.3.0 00:04:45.081 SYMLINK libspdk_event_iobuf.so 00:04:45.340 CC module/event/subsystems/accel/accel.o 00:04:45.597 LIB libspdk_event_accel.a 00:04:45.597 SO libspdk_event_accel.so.6.0 00:04:45.597 SYMLINK libspdk_event_accel.so 00:04:46.163 CC module/event/subsystems/bdev/bdev.o 00:04:46.421 LIB libspdk_event_bdev.a 00:04:46.421 SO libspdk_event_bdev.so.6.0 00:04:46.421 SYMLINK libspdk_event_bdev.so 00:04:46.679 CC module/event/subsystems/scsi/scsi.o 00:04:46.679 CC module/event/subsystems/nbd/nbd.o 00:04:46.679 CC module/event/subsystems/ublk/ublk.o 00:04:46.679 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:46.679 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:46.937 LIB libspdk_event_nbd.a 00:04:46.937 LIB libspdk_event_scsi.a 00:04:46.937 SO libspdk_event_nbd.so.6.0 00:04:46.937 SO libspdk_event_scsi.so.6.0 00:04:46.937 LIB libspdk_event_ublk.a 00:04:46.937 SO libspdk_event_ublk.so.3.0 00:04:46.937 SYMLINK libspdk_event_nbd.so 00:04:47.195 SYMLINK libspdk_event_scsi.so 00:04:47.195 LIB libspdk_event_nvmf.a 00:04:47.195 SYMLINK libspdk_event_ublk.so 00:04:47.195 SO libspdk_event_nvmf.so.6.0 00:04:47.195 SYMLINK libspdk_event_nvmf.so 00:04:47.453 CC module/event/subsystems/iscsi/iscsi.o 00:04:47.453 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:47.453 LIB libspdk_event_vhost_scsi.a 00:04:47.713 SO libspdk_event_vhost_scsi.so.3.0 00:04:47.713 LIB libspdk_event_iscsi.a 00:04:47.713 SYMLINK libspdk_event_vhost_scsi.so 00:04:47.713 SO libspdk_event_iscsi.so.6.0 00:04:47.713 SYMLINK libspdk_event_iscsi.so 00:04:47.983 SO libspdk.so.6.0 00:04:47.983 SYMLINK libspdk.so 
00:04:48.242 CC app/trace_record/trace_record.o 00:04:48.242 CXX app/trace/trace.o 00:04:48.242 CC app/spdk_lspci/spdk_lspci.o 00:04:48.242 CC app/iscsi_tgt/iscsi_tgt.o 00:04:48.242 CC app/spdk_tgt/spdk_tgt.o 00:04:48.242 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:48.242 CC app/nvmf_tgt/nvmf_main.o 00:04:48.500 CC examples/ioat/perf/perf.o 00:04:48.500 CC examples/util/zipf/zipf.o 00:04:48.500 CC test/thread/poller_perf/poller_perf.o 00:04:48.500 LINK spdk_lspci 00:04:48.500 LINK nvmf_tgt 00:04:48.500 LINK interrupt_tgt 00:04:48.500 LINK spdk_trace_record 00:04:48.500 LINK spdk_tgt 00:04:48.758 LINK iscsi_tgt 00:04:48.758 LINK poller_perf 00:04:48.758 LINK ioat_perf 00:04:48.758 LINK spdk_trace 00:04:48.758 LINK zipf 00:04:48.758 CC app/spdk_nvme_perf/perf.o 00:04:48.758 CC app/spdk_nvme_identify/identify.o 00:04:49.017 CC examples/ioat/verify/verify.o 00:04:49.017 CC app/spdk_top/spdk_top.o 00:04:49.017 CC app/spdk_nvme_discover/discovery_aer.o 00:04:49.017 CC app/spdk_dd/spdk_dd.o 00:04:49.017 CC test/app/bdev_svc/bdev_svc.o 00:04:49.017 CC test/dma/test_dma/test_dma.o 00:04:49.017 LINK verify 00:04:49.017 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:49.017 CC examples/thread/thread/thread_ex.o 00:04:49.275 LINK spdk_nvme_discover 00:04:49.275 LINK bdev_svc 00:04:49.275 LINK thread 00:04:49.275 LINK test_dma 00:04:49.534 TEST_HEADER include/spdk/accel.h 00:04:49.534 TEST_HEADER include/spdk/accel_module.h 00:04:49.534 TEST_HEADER include/spdk/assert.h 00:04:49.534 TEST_HEADER include/spdk/barrier.h 00:04:49.534 TEST_HEADER include/spdk/base64.h 00:04:49.534 CC examples/sock/hello_world/hello_sock.o 00:04:49.534 TEST_HEADER include/spdk/bdev.h 00:04:49.534 TEST_HEADER include/spdk/bdev_module.h 00:04:49.534 TEST_HEADER include/spdk/bdev_zone.h 00:04:49.534 TEST_HEADER include/spdk/bit_array.h 00:04:49.534 TEST_HEADER include/spdk/bit_pool.h 00:04:49.534 TEST_HEADER include/spdk/blob_bdev.h 00:04:49.534 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:49.534 LINK 
nvme_fuzz 00:04:49.534 TEST_HEADER include/spdk/blobfs.h 00:04:49.534 TEST_HEADER include/spdk/blob.h 00:04:49.534 TEST_HEADER include/spdk/conf.h 00:04:49.534 TEST_HEADER include/spdk/config.h 00:04:49.534 TEST_HEADER include/spdk/cpuset.h 00:04:49.534 TEST_HEADER include/spdk/crc16.h 00:04:49.534 TEST_HEADER include/spdk/crc32.h 00:04:49.534 TEST_HEADER include/spdk/crc64.h 00:04:49.534 TEST_HEADER include/spdk/dif.h 00:04:49.534 TEST_HEADER include/spdk/dma.h 00:04:49.534 TEST_HEADER include/spdk/endian.h 00:04:49.534 TEST_HEADER include/spdk/env_dpdk.h 00:04:49.534 TEST_HEADER include/spdk/env.h 00:04:49.534 TEST_HEADER include/spdk/event.h 00:04:49.534 TEST_HEADER include/spdk/fd_group.h 00:04:49.534 TEST_HEADER include/spdk/fd.h 00:04:49.534 TEST_HEADER include/spdk/file.h 00:04:49.534 TEST_HEADER include/spdk/ftl.h 00:04:49.534 TEST_HEADER include/spdk/gpt_spec.h 00:04:49.534 TEST_HEADER include/spdk/hexlify.h 00:04:49.534 TEST_HEADER include/spdk/histogram_data.h 00:04:49.534 TEST_HEADER include/spdk/idxd.h 00:04:49.534 TEST_HEADER include/spdk/idxd_spec.h 00:04:49.534 TEST_HEADER include/spdk/init.h 00:04:49.534 TEST_HEADER include/spdk/ioat.h 00:04:49.534 TEST_HEADER include/spdk/ioat_spec.h 00:04:49.534 TEST_HEADER include/spdk/iscsi_spec.h 00:04:49.534 TEST_HEADER include/spdk/json.h 00:04:49.534 TEST_HEADER include/spdk/jsonrpc.h 00:04:49.534 TEST_HEADER include/spdk/keyring.h 00:04:49.534 TEST_HEADER include/spdk/keyring_module.h 00:04:49.534 TEST_HEADER include/spdk/likely.h 00:04:49.534 TEST_HEADER include/spdk/log.h 00:04:49.534 TEST_HEADER include/spdk/lvol.h 00:04:49.534 TEST_HEADER include/spdk/memory.h 00:04:49.534 TEST_HEADER include/spdk/mmio.h 00:04:49.534 TEST_HEADER include/spdk/nbd.h 00:04:49.534 TEST_HEADER include/spdk/net.h 00:04:49.534 TEST_HEADER include/spdk/notify.h 00:04:49.534 TEST_HEADER include/spdk/nvme.h 00:04:49.534 TEST_HEADER include/spdk/nvme_intel.h 00:04:49.534 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:49.534 
TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:49.534 TEST_HEADER include/spdk/nvme_spec.h 00:04:49.534 LINK spdk_nvme_perf 00:04:49.534 TEST_HEADER include/spdk/nvme_zns.h 00:04:49.534 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:49.534 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:49.534 LINK spdk_nvme_identify 00:04:49.534 TEST_HEADER include/spdk/nvmf.h 00:04:49.534 TEST_HEADER include/spdk/nvmf_spec.h 00:04:49.534 TEST_HEADER include/spdk/nvmf_transport.h 00:04:49.534 TEST_HEADER include/spdk/opal.h 00:04:49.534 TEST_HEADER include/spdk/opal_spec.h 00:04:49.534 TEST_HEADER include/spdk/pci_ids.h 00:04:49.534 TEST_HEADER include/spdk/pipe.h 00:04:49.534 TEST_HEADER include/spdk/queue.h 00:04:49.534 TEST_HEADER include/spdk/reduce.h 00:04:49.534 TEST_HEADER include/spdk/rpc.h 00:04:49.534 TEST_HEADER include/spdk/scheduler.h 00:04:49.534 TEST_HEADER include/spdk/scsi.h 00:04:49.534 TEST_HEADER include/spdk/scsi_spec.h 00:04:49.534 TEST_HEADER include/spdk/sock.h 00:04:49.534 TEST_HEADER include/spdk/stdinc.h 00:04:49.534 TEST_HEADER include/spdk/string.h 00:04:49.534 TEST_HEADER include/spdk/thread.h 00:04:49.534 TEST_HEADER include/spdk/trace.h 00:04:49.534 TEST_HEADER include/spdk/trace_parser.h 00:04:49.534 TEST_HEADER include/spdk/tree.h 00:04:49.534 TEST_HEADER include/spdk/ublk.h 00:04:49.534 TEST_HEADER include/spdk/util.h 00:04:49.534 TEST_HEADER include/spdk/uuid.h 00:04:49.534 TEST_HEADER include/spdk/version.h 00:04:49.534 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:49.534 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:49.534 TEST_HEADER include/spdk/vhost.h 00:04:49.534 TEST_HEADER include/spdk/vmd.h 00:04:49.534 TEST_HEADER include/spdk/xor.h 00:04:49.534 TEST_HEADER include/spdk/zipf.h 00:04:49.534 CXX test/cpp_headers/accel.o 00:04:49.534 LINK hello_sock 00:04:49.534 CC test/env/mem_callbacks/mem_callbacks.o 00:04:49.792 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:49.792 CC test/env/vtophys/vtophys.o 00:04:49.792 CC 
test/event/event_perf/event_perf.o 00:04:49.792 CXX test/cpp_headers/accel_module.o 00:04:49.792 LINK spdk_dd 00:04:49.792 CC test/event/reactor/reactor.o 00:04:49.792 CC test/event/reactor_perf/reactor_perf.o 00:04:49.792 LINK vtophys 00:04:49.792 LINK event_perf 00:04:50.050 LINK spdk_top 00:04:50.050 CXX test/cpp_headers/assert.o 00:04:50.050 LINK reactor 00:04:50.050 CC examples/vmd/lsvmd/lsvmd.o 00:04:50.050 LINK reactor_perf 00:04:50.050 CXX test/cpp_headers/barrier.o 00:04:50.050 CC test/event/app_repeat/app_repeat.o 00:04:50.050 LINK lsvmd 00:04:50.050 LINK mem_callbacks 00:04:50.307 CC test/event/scheduler/scheduler.o 00:04:50.307 CC examples/vmd/led/led.o 00:04:50.307 CC test/rpc_client/rpc_client_test.o 00:04:50.307 CXX test/cpp_headers/base64.o 00:04:50.307 CC app/fio/nvme/fio_plugin.o 00:04:50.307 LINK app_repeat 00:04:50.307 LINK led 00:04:50.307 LINK scheduler 00:04:50.307 LINK rpc_client_test 00:04:50.307 CXX test/cpp_headers/bdev.o 00:04:50.307 CC app/fio/bdev/fio_plugin.o 00:04:50.307 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:50.565 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:50.565 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:50.565 CXX test/cpp_headers/bdev_module.o 00:04:50.565 LINK env_dpdk_post_init 00:04:50.565 CC test/app/histogram_perf/histogram_perf.o 00:04:50.565 CC app/vhost/vhost.o 00:04:50.565 CC test/env/memory/memory_ut.o 00:04:50.823 CC examples/idxd/perf/perf.o 00:04:50.823 CXX test/cpp_headers/bdev_zone.o 00:04:50.823 LINK histogram_perf 00:04:50.823 LINK spdk_bdev 00:04:50.823 LINK vhost_fuzz 00:04:50.823 LINK vhost 00:04:50.823 CXX test/cpp_headers/bit_array.o 00:04:51.080 LINK spdk_nvme 00:04:51.080 LINK idxd_perf 00:04:51.080 CXX test/cpp_headers/bit_pool.o 00:04:51.080 LINK iscsi_fuzz 00:04:51.080 CC examples/nvme/hello_world/hello_world.o 00:04:51.080 CC test/env/pci/pci_ut.o 00:04:51.080 CC examples/accel/perf/accel_perf.o 00:04:51.080 CC examples/nvme/reconnect/reconnect.o 00:04:51.080 CXX 
test/cpp_headers/blob_bdev.o 00:04:51.454 CC examples/blob/hello_world/hello_blob.o 00:04:51.454 CC test/accel/dif/dif.o 00:04:51.454 CC examples/blob/cli/blobcli.o 00:04:51.454 CXX test/cpp_headers/blobfs_bdev.o 00:04:51.454 LINK hello_world 00:04:51.454 LINK hello_blob 00:04:51.454 CC test/app/jsoncat/jsoncat.o 00:04:51.454 LINK reconnect 00:04:51.454 LINK pci_ut 00:04:51.454 CXX test/cpp_headers/blobfs.o 00:04:51.454 LINK accel_perf 00:04:51.712 LINK jsoncat 00:04:51.712 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:51.712 LINK memory_ut 00:04:51.712 CXX test/cpp_headers/blob.o 00:04:51.712 LINK dif 00:04:51.712 CC examples/nvme/arbitration/arbitration.o 00:04:51.712 CC examples/nvme/hotplug/hotplug.o 00:04:51.970 CXX test/cpp_headers/conf.o 00:04:51.970 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:51.970 CC test/app/stub/stub.o 00:04:51.970 CC examples/nvme/abort/abort.o 00:04:51.970 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:51.970 CXX test/cpp_headers/config.o 00:04:51.970 LINK stub 00:04:51.970 CXX test/cpp_headers/cpuset.o 00:04:51.970 LINK hotplug 00:04:51.970 LINK cmb_copy 00:04:51.970 LINK arbitration 00:04:51.970 LINK nvme_manage 00:04:52.228 LINK pmr_persistence 00:04:52.228 CXX test/cpp_headers/crc16.o 00:04:52.228 CC examples/bdev/hello_world/hello_bdev.o 00:04:52.228 CXX test/cpp_headers/crc32.o 00:04:52.228 CXX test/cpp_headers/crc64.o 00:04:52.228 LINK abort 00:04:52.228 CXX test/cpp_headers/dif.o 00:04:52.228 CXX test/cpp_headers/dma.o 00:04:52.228 CXX test/cpp_headers/endian.o 00:04:52.228 LINK blobcli 00:04:52.486 CXX test/cpp_headers/env_dpdk.o 00:04:52.486 CXX test/cpp_headers/env.o 00:04:52.486 CXX test/cpp_headers/event.o 00:04:52.486 CXX test/cpp_headers/fd_group.o 00:04:52.486 LINK hello_bdev 00:04:52.486 CXX test/cpp_headers/fd.o 00:04:52.486 CXX test/cpp_headers/file.o 00:04:52.486 CC examples/bdev/bdevperf/bdevperf.o 00:04:52.486 CXX test/cpp_headers/ftl.o 00:04:52.486 CXX test/cpp_headers/gpt_spec.o 00:04:52.486 CXX 
test/cpp_headers/hexlify.o 00:04:52.486 CXX test/cpp_headers/histogram_data.o 00:04:52.486 CXX test/cpp_headers/idxd.o 00:04:52.743 CC test/blobfs/mkfs/mkfs.o 00:04:52.743 CXX test/cpp_headers/idxd_spec.o 00:04:52.743 CC test/nvme/aer/aer.o 00:04:52.743 CC test/nvme/reset/reset.o 00:04:52.743 CC test/nvme/e2edp/nvme_dp.o 00:04:52.743 CC test/nvme/overhead/overhead.o 00:04:52.743 CC test/lvol/esnap/esnap.o 00:04:52.743 CC test/nvme/sgl/sgl.o 00:04:52.743 CXX test/cpp_headers/init.o 00:04:52.743 LINK mkfs 00:04:53.001 CC test/nvme/err_injection/err_injection.o 00:04:53.001 LINK aer 00:04:53.001 LINK reset 00:04:53.001 CXX test/cpp_headers/ioat.o 00:04:53.001 LINK nvme_dp 00:04:53.001 CXX test/cpp_headers/ioat_spec.o 00:04:53.259 LINK bdevperf 00:04:53.259 LINK err_injection 00:04:53.259 CXX test/cpp_headers/iscsi_spec.o 00:04:53.259 LINK sgl 00:04:53.259 CXX test/cpp_headers/json.o 00:04:53.259 LINK overhead 00:04:53.259 CC test/nvme/startup/startup.o 00:04:53.259 CC test/nvme/reserve/reserve.o 00:04:53.259 CC test/nvme/simple_copy/simple_copy.o 00:04:53.259 CXX test/cpp_headers/jsonrpc.o 00:04:53.517 CXX test/cpp_headers/keyring.o 00:04:53.517 LINK startup 00:04:53.517 CC test/nvme/connect_stress/connect_stress.o 00:04:53.517 LINK reserve 00:04:53.517 CC test/nvme/boot_partition/boot_partition.o 00:04:53.517 CC test/nvme/compliance/nvme_compliance.o 00:04:53.517 CXX test/cpp_headers/keyring_module.o 00:04:53.517 LINK simple_copy 00:04:53.517 LINK connect_stress 00:04:53.517 CC test/nvme/fused_ordering/fused_ordering.o 00:04:53.775 LINK boot_partition 00:04:53.775 CXX test/cpp_headers/likely.o 00:04:53.775 CC test/nvme/fdp/fdp.o 00:04:53.775 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:53.775 CXX test/cpp_headers/log.o 00:04:53.775 CC examples/nvmf/nvmf/nvmf.o 00:04:53.775 LINK nvme_compliance 00:04:53.775 CC test/nvme/cuse/cuse.o 00:04:53.775 CXX test/cpp_headers/lvol.o 00:04:53.775 LINK fused_ordering 00:04:54.033 LINK doorbell_aers 00:04:54.033 CXX 
test/cpp_headers/memory.o 00:04:54.033 CXX test/cpp_headers/mmio.o 00:04:54.033 LINK fdp 00:04:54.033 CXX test/cpp_headers/nbd.o 00:04:54.033 CXX test/cpp_headers/net.o 00:04:54.033 CXX test/cpp_headers/notify.o 00:04:54.033 LINK nvmf 00:04:54.033 CXX test/cpp_headers/nvme.o 00:04:54.033 CXX test/cpp_headers/nvme_intel.o 00:04:54.033 CXX test/cpp_headers/nvme_ocssd.o 00:04:54.291 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:54.291 CXX test/cpp_headers/nvme_spec.o 00:04:54.291 CXX test/cpp_headers/nvme_zns.o 00:04:54.291 CC test/bdev/bdevio/bdevio.o 00:04:54.291 CXX test/cpp_headers/nvmf_cmd.o 00:04:54.291 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:54.291 CXX test/cpp_headers/nvmf.o 00:04:54.291 CXX test/cpp_headers/nvmf_spec.o 00:04:54.291 CXX test/cpp_headers/nvmf_transport.o 00:04:54.291 CXX test/cpp_headers/opal.o 00:04:54.291 CXX test/cpp_headers/opal_spec.o 00:04:54.549 CXX test/cpp_headers/pci_ids.o 00:04:54.549 CXX test/cpp_headers/pipe.o 00:04:54.549 CXX test/cpp_headers/queue.o 00:04:54.549 CXX test/cpp_headers/reduce.o 00:04:54.549 CXX test/cpp_headers/rpc.o 00:04:54.549 CXX test/cpp_headers/scheduler.o 00:04:54.549 LINK bdevio 00:04:54.549 CXX test/cpp_headers/scsi.o 00:04:54.549 CXX test/cpp_headers/scsi_spec.o 00:04:54.549 CXX test/cpp_headers/sock.o 00:04:54.549 CXX test/cpp_headers/stdinc.o 00:04:54.549 CXX test/cpp_headers/string.o 00:04:54.807 CXX test/cpp_headers/thread.o 00:04:54.807 CXX test/cpp_headers/trace.o 00:04:54.807 CXX test/cpp_headers/trace_parser.o 00:04:54.807 CXX test/cpp_headers/tree.o 00:04:54.807 CXX test/cpp_headers/ublk.o 00:04:54.807 CXX test/cpp_headers/util.o 00:04:54.807 CXX test/cpp_headers/uuid.o 00:04:54.807 CXX test/cpp_headers/version.o 00:04:54.807 CXX test/cpp_headers/vfio_user_pci.o 00:04:54.807 CXX test/cpp_headers/vfio_user_spec.o 00:04:54.807 CXX test/cpp_headers/vhost.o 00:04:54.807 CXX test/cpp_headers/vmd.o 00:04:54.807 CXX test/cpp_headers/xor.o 00:04:54.807 CXX test/cpp_headers/zipf.o 00:04:55.373 LINK cuse 
00:04:56.748 LINK esnap 00:04:57.316 ************************************ 00:04:57.316 END TEST make 00:04:57.316 ************************************ 00:04:57.316 00:04:57.316 real 0m57.247s 00:04:57.316 user 4m41.332s 00:04:57.316 sys 1m19.069s 00:04:57.316 22:06:29 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:57.316 22:06:29 make -- common/autotest_common.sh@10 -- $ set +x 00:04:57.574 22:06:29 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:57.574 22:06:29 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:57.575 22:06:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:57.575 22:06:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:57.575 22:06:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:57.575 22:06:29 -- pm/common@44 -- $ pid=5946 00:04:57.575 22:06:29 -- pm/common@50 -- $ kill -TERM 5946 00:04:57.575 22:06:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:57.575 22:06:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:57.575 22:06:29 -- pm/common@44 -- $ pid=5948 00:04:57.575 22:06:29 -- pm/common@50 -- $ kill -TERM 5948 00:04:57.575 22:06:29 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:57.575 22:06:29 -- nvmf/common.sh@7 -- # uname -s 00:04:57.575 22:06:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:57.575 22:06:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:57.575 22:06:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:57.575 22:06:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:57.575 22:06:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:57.575 22:06:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:57.575 22:06:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:57.575 22:06:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:57.575 22:06:29 -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:57.575 22:06:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:57.575 22:06:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:870cc518-1c26-4e82-9298-fb61f38a7fd8 00:04:57.575 22:06:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=870cc518-1c26-4e82-9298-fb61f38a7fd8 00:04:57.575 22:06:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:57.575 22:06:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:57.575 22:06:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:57.575 22:06:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:57.575 22:06:29 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:57.575 22:06:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:57.575 22:06:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:57.575 22:06:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:57.575 22:06:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.575 22:06:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.575 22:06:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:04:57.575 22:06:29 -- paths/export.sh@5 -- # export PATH 00:04:57.575 22:06:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.575 22:06:29 -- nvmf/common.sh@47 -- # : 0 00:04:57.575 22:06:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:57.575 22:06:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:57.575 22:06:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:57.575 22:06:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:57.575 22:06:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:57.575 22:06:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:57.575 22:06:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:57.575 22:06:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:57.575 22:06:29 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:57.575 22:06:29 -- spdk/autotest.sh@32 -- # uname -s 00:04:57.575 22:06:29 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:57.575 22:06:29 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:57.575 22:06:29 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:57.575 22:06:29 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:57.575 22:06:29 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:57.575 22:06:29 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:57.575 22:06:29 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:57.575 22:06:29 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:57.575 22:06:29 -- spdk/autotest.sh@48 -- # udevadm_pid=66269 00:04:57.575 22:06:29 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm 
monitor --property 00:04:57.575 22:06:29 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:57.575 22:06:29 -- pm/common@17 -- # local monitor 00:04:57.575 22:06:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:57.575 22:06:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:57.575 22:06:29 -- pm/common@25 -- # sleep 1 00:04:57.575 22:06:29 -- pm/common@21 -- # date +%s 00:04:57.575 22:06:29 -- pm/common@21 -- # date +%s 00:04:57.575 22:06:29 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721772389 00:04:57.575 22:06:29 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721772389 00:04:57.833 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721772389_collect-vmstat.pm.log 00:04:57.833 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721772389_collect-cpu-load.pm.log 00:04:58.770 22:06:30 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:58.770 22:06:30 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:58.770 22:06:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:58.770 22:06:30 -- common/autotest_common.sh@10 -- # set +x 00:04:58.770 22:06:30 -- spdk/autotest.sh@59 -- # create_test_list 00:04:58.770 22:06:30 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:58.770 22:06:30 -- common/autotest_common.sh@10 -- # set +x 00:04:58.770 22:06:30 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:58.770 22:06:30 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:58.770 22:06:30 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:58.770 22:06:30 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 
00:04:58.770 22:06:30 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:58.770 22:06:30 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:58.770 22:06:30 -- common/autotest_common.sh@1453 -- # uname 00:04:58.770 22:06:30 -- common/autotest_common.sh@1453 -- # '[' Linux = FreeBSD ']' 00:04:58.770 22:06:30 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:58.770 22:06:30 -- common/autotest_common.sh@1473 -- # uname 00:04:58.770 22:06:30 -- common/autotest_common.sh@1473 -- # [[ Linux = FreeBSD ]] 00:04:58.770 22:06:30 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:58.770 22:06:30 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:58.770 22:06:30 -- spdk/autotest.sh@72 -- # hash lcov 00:04:58.770 22:06:30 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:58.770 22:06:30 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:58.770 --rc lcov_branch_coverage=1 00:04:58.770 --rc lcov_function_coverage=1 00:04:58.770 --rc genhtml_branch_coverage=1 00:04:58.770 --rc genhtml_function_coverage=1 00:04:58.770 --rc genhtml_legend=1 00:04:58.770 --rc geninfo_all_blocks=1 00:04:58.770 ' 00:04:58.770 22:06:30 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:58.770 --rc lcov_branch_coverage=1 00:04:58.770 --rc lcov_function_coverage=1 00:04:58.770 --rc genhtml_branch_coverage=1 00:04:58.770 --rc genhtml_function_coverage=1 00:04:58.770 --rc genhtml_legend=1 00:04:58.770 --rc geninfo_all_blocks=1 00:04:58.770 ' 00:04:58.770 22:06:30 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:58.770 --rc lcov_branch_coverage=1 00:04:58.770 --rc lcov_function_coverage=1 00:04:58.770 --rc genhtml_branch_coverage=1 00:04:58.770 --rc genhtml_function_coverage=1 00:04:58.770 --rc genhtml_legend=1 00:04:58.770 --rc geninfo_all_blocks=1 00:04:58.770 --no-external' 00:04:58.770 22:06:30 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:58.770 --rc lcov_branch_coverage=1 00:04:58.770 --rc lcov_function_coverage=1 00:04:58.770 --rc 
genhtml_branch_coverage=1 00:04:58.770 --rc genhtml_function_coverage=1 00:04:58.770 --rc genhtml_legend=1 00:04:58.770 --rc geninfo_all_blocks=1 00:04:58.770 --no-external' 00:04:58.770 22:06:30 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:58.770 lcov: LCOV version 1.14 00:04:58.770 22:06:30 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:13.673 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:13.673 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:23.676 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:23.676 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:23.676 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:23.676 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:23.676 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:23.676 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:23.676 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:23.676 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:23.676 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:23.676 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:23.676 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:23.676 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:23.676 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:23.676 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:23.676 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:23.676 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:23.677 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 
00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:23.677 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:23.677 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:23.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:23.677 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:23.678 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:23.678 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:23.678 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:23.678 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:23.678 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:23.678 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:23.678 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:23.678 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:23.678 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:23.678 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:23.678 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:23.678 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions 
found 00:05:23.678 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:23.678 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:23.678 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:23.678 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:23.678 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:23.678 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:23.678 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:23.678 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:23.678 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:23.678 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:23.678 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:23.678 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:23.678 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:23.678 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:23.678 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:23.678 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:23.678 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:23.678 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:23.678 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:23.678 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:23.678 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:23.678 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:23.678 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:23.678 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:23.678 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:23.678 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:23.678 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:23.678 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:23.678 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:23.678 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:23.678 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:23.678 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:23.678 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:23.678 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:23.678 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:23.678 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:23.678 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:23.678 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:23.678 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:26.967 22:06:58 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:26.968 22:06:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:26.968 22:06:58 -- common/autotest_common.sh@10 -- # set +x 00:05:26.968 22:06:58 -- spdk/autotest.sh@91 -- # rm -f 00:05:26.968 22:06:58 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:27.537 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:27.537 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:27.537 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:27.537 22:06:59 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:27.537 22:06:59 -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:05:27.537 22:06:59 -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:05:27.537 22:06:59 -- common/autotest_common.sh@1668 -- # local nvme bdf 00:05:27.537 22:06:59 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:05:27.537 22:06:59 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:27.537 22:06:59 -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:05:27.537 22:06:59 -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:27.537 22:06:59 -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:05:27.537 22:06:59 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:05:27.537 22:06:59 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:05:27.537 22:06:59 -- common/autotest_common.sh@1660 -- # local device=nvme1n1 00:05:27.537 22:06:59 -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:27.537 22:06:59 -- common/autotest_common.sh@1663 -- # [[ none != 
none ]] 00:05:27.537 22:06:59 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:05:27.537 22:06:59 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:05:27.537 22:06:59 -- common/autotest_common.sh@1660 -- # local device=nvme1n2 00:05:27.537 22:06:59 -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:27.537 22:06:59 -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:05:27.537 22:06:59 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:05:27.537 22:06:59 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:05:27.537 22:06:59 -- common/autotest_common.sh@1660 -- # local device=nvme1n3 00:05:27.537 22:06:59 -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:27.537 22:06:59 -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:05:27.537 22:06:59 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:27.537 22:06:59 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:27.537 22:06:59 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:27.537 22:06:59 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:27.537 22:06:59 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:27.537 22:06:59 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:27.796 No valid GPT data, bailing 00:05:27.796 22:06:59 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:27.796 22:06:59 -- scripts/common.sh@391 -- # pt= 00:05:27.796 22:06:59 -- scripts/common.sh@392 -- # return 1 00:05:27.796 22:06:59 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:27.796 1+0 records in 00:05:27.796 1+0 records out 00:05:27.796 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00442615 s, 237 MB/s 00:05:27.796 22:06:59 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:27.796 22:06:59 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:27.796 22:06:59 
-- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:27.796 22:06:59 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:27.796 22:06:59 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:27.796 No valid GPT data, bailing 00:05:27.796 22:06:59 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:27.796 22:06:59 -- scripts/common.sh@391 -- # pt= 00:05:27.796 22:06:59 -- scripts/common.sh@392 -- # return 1 00:05:27.796 22:06:59 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:27.796 1+0 records in 00:05:27.796 1+0 records out 00:05:27.796 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00517616 s, 203 MB/s 00:05:27.796 22:06:59 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:27.796 22:06:59 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:27.796 22:06:59 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:05:27.796 22:06:59 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:05:27.796 22:06:59 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:27.796 No valid GPT data, bailing 00:05:27.796 22:06:59 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:27.796 22:06:59 -- scripts/common.sh@391 -- # pt= 00:05:27.796 22:06:59 -- scripts/common.sh@392 -- # return 1 00:05:27.796 22:06:59 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:27.796 1+0 records in 00:05:27.796 1+0 records out 00:05:27.796 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00601876 s, 174 MB/s 00:05:27.796 22:06:59 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:27.796 22:06:59 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:27.796 22:06:59 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:05:27.796 22:06:59 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:05:27.796 22:06:59 -- scripts/common.sh@387 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:28.056 No valid GPT data, bailing 00:05:28.056 22:07:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:28.056 22:07:00 -- scripts/common.sh@391 -- # pt= 00:05:28.056 22:07:00 -- scripts/common.sh@392 -- # return 1 00:05:28.056 22:07:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:28.056 1+0 records in 00:05:28.056 1+0 records out 00:05:28.056 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503901 s, 208 MB/s 00:05:28.056 22:07:00 -- spdk/autotest.sh@118 -- # sync 00:05:28.056 22:07:00 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:28.056 22:07:00 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:28.056 22:07:00 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:30.590 22:07:02 -- spdk/autotest.sh@124 -- # uname -s 00:05:30.590 22:07:02 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:30.590 22:07:02 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:30.590 22:07:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.590 22:07:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.590 22:07:02 -- common/autotest_common.sh@10 -- # set +x 00:05:30.590 ************************************ 00:05:30.590 START TEST setup.sh 00:05:30.590 ************************************ 00:05:30.590 22:07:02 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:30.590 * Looking for test storage... 
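The loop traced above probes each /dev/nvme*n* namespace with spdk-gpt.py and, when no valid GPT is found ("No valid GPT data, bailing"), zeroes the first MiB with dd before handing the device to the tests. A minimal standalone sketch of that check-then-wipe pattern, run against a throwaway file rather than a real block device, and looking for the GPT signature "EFI PART" at byte offset 512 directly instead of calling spdk-gpt.py:

```shell
# Sketch of autotest.sh's check-then-wipe pattern, against a plain file
# so it is safe to run. Instead of spdk-gpt.py we look for the GPT
# signature "EFI PART" at byte offset 512 (LBA 1) directly.
set -eu

has_gpt() {
    local sig
    # tr strips the NUL bytes a blank image would otherwise feed bash
    sig=$(dd if="$1" bs=1 skip=512 count=8 status=none | tr -d '\0')
    [ "$sig" = "EFI PART" ]
}

img=$(mktemp)                                  # stand-in for /dev/nvmeXnY
dd if=/dev/zero of="$img" bs=1M count=2 status=none

if has_gpt "$img"; then
    echo "partition table present, leaving device alone"
else
    echo "No valid GPT data, wiping first MiB"
    dd if=/dev/zero of="$img" bs=1M count=1 conv=notrunc status=none
fi
rm -f "$img"
```

The real script additionally consults `blkid -s PTTYPE` before deciding, as the trace shows; the signature probe here is just the simplest stand-in for that decision.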
00:05:30.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:30.590 22:07:02 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:30.590 22:07:02 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:30.590 22:07:02 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:30.590 22:07:02 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.590 22:07:02 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.590 22:07:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:30.590 ************************************ 00:05:30.590 START TEST acl 00:05:30.590 ************************************ 00:05:30.590 22:07:02 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:30.590 * Looking for test storage... 00:05:30.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:30.590 22:07:02 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:30.590 22:07:02 setup.sh.acl -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:05:30.590 22:07:02 setup.sh.acl -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:05:30.590 22:07:02 setup.sh.acl -- common/autotest_common.sh@1668 -- # local nvme bdf 00:05:30.590 22:07:02 setup.sh.acl -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:05:30.590 22:07:02 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:30.590 22:07:02 setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:05:30.590 22:07:02 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:30.590 22:07:02 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:05:30.590 22:07:02 setup.sh.acl -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:05:30.590 22:07:02 setup.sh.acl -- common/autotest_common.sh@1671 -- # 
is_block_zoned nvme1n1 00:05:30.590 22:07:02 setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme1n1 00:05:30.590 22:07:02 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:30.590 22:07:02 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:05:30.590 22:07:02 setup.sh.acl -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:05:30.590 22:07:02 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:05:30.590 22:07:02 setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme1n2 00:05:30.590 22:07:02 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:30.590 22:07:02 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:05:30.590 22:07:02 setup.sh.acl -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:05:30.590 22:07:02 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:05:30.590 22:07:02 setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme1n3 00:05:30.590 22:07:02 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:30.590 22:07:02 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:05:30.590 22:07:02 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:30.590 22:07:02 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:30.590 22:07:02 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:30.590 22:07:02 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:30.590 22:07:02 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:30.590 22:07:02 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:30.590 22:07:02 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:31.523 22:07:03 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:31.523 22:07:03 setup.sh.acl 
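get_zoned_devs, traced above in both autotest.sh and acl.sh, classifies each NVMe namespace by reading /sys/block/<dev>/queue/zoned, which the kernel reports as "none" for conventional drives. A standalone sketch of that scan; SYSBLOCK is a hypothetical override added here so the function can be exercised against a mock sysfs tree:

```shell
# Standalone sketch of get_zoned_devs: a device counts as zoned when
# /sys/block/<dev>/queue/zoned reads anything other than "none"
# (i.e. "host-aware" or "host-managed"). SYSBLOCK is a hypothetical
# override for testing; the real script reads /sys/block directly.
get_zoned_devs() {
    local sysblock=${SYSBLOCK:-/sys/block} dev zoned
    for dev in "$sysblock"/nvme*; do
        [ -e "$dev/queue/zoned" ] || continue
        read -r zoned < "$dev/queue/zoned"
        [ "$zoned" != "none" ] && echo "${dev##*/}"
    done
    return 0
}
```

The real helper collects the hits into the `zoned_devs` associative array (declared with `local -gA` in the trace) so later stages can skip those namespaces; echoing the names is enough for the sketch. In this run every `[[ none != none ]]` test fails, so the array stays empty and `(( 0 > 0 ))` skips the zoned-device branch.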
-- setup/acl.sh@16 -- # local dev driver 00:05:31.523 22:07:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:31.523 22:07:03 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:31.523 22:07:03 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.523 22:07:03 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:32.087 22:07:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:32.087 22:07:04 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:32.087 22:07:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:32.087 Hugepages 00:05:32.087 node hugesize free / total 00:05:32.087 22:07:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:32.087 22:07:04 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:32.087 22:07:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:32.087 00:05:32.087 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:32.087 22:07:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:32.087 22:07:04 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:32.087 22:07:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:32.087 22:07:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:32.087 22:07:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:32.087 22:07:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:32.087 22:07:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:32.344 22:07:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:32.344 22:07:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:32.344 22:07:04 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:32.344 22:07:04 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:32.344 22:07:04 setup.sh.acl -- setup/acl.sh@22 -- # 
drivers["$dev"]=nvme 00:05:32.344 22:07:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:32.344 22:07:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:05:32.344 22:07:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:32.344 22:07:04 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:32.344 22:07:04 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:32.344 22:07:04 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:32.344 22:07:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:32.344 22:07:04 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:32.344 22:07:04 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:32.344 22:07:04 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.344 22:07:04 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.344 22:07:04 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:32.344 ************************************ 00:05:32.344 START TEST denied 00:05:32.344 ************************************ 00:05:32.344 22:07:04 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:05:32.344 22:07:04 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:32.344 22:07:04 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:32.344 22:07:04 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:32.344 22:07:04 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.344 22:07:04 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:33.277 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:33.277 22:07:05 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:33.277 22:07:05 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:05:33.277 22:07:05 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:33.277 22:07:05 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:33.277 22:07:05 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:33.535 22:07:05 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:33.535 22:07:05 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:33.535 22:07:05 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:33.535 22:07:05 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:33.535 22:07:05 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:34.115 00:05:34.115 real 0m1.696s 00:05:34.115 user 0m0.619s 00:05:34.115 sys 0m1.050s 00:05:34.115 22:07:06 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.115 22:07:06 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:34.115 ************************************ 00:05:34.115 END TEST denied 00:05:34.115 ************************************ 00:05:34.115 22:07:06 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:34.115 22:07:06 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.115 22:07:06 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.115 22:07:06 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:34.115 ************************************ 00:05:34.115 START TEST allowed 00:05:34.115 ************************************ 00:05:34.115 22:07:06 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:05:34.115 22:07:06 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:34.115 22:07:06 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:34.115 22:07:06 setup.sh.acl.allowed -- 
setup/common.sh@9 -- # [[ output == output ]] 00:05:34.115 22:07:06 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:34.115 22:07:06 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:35.047 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.047 22:07:07 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:05:35.047 22:07:07 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:35.047 22:07:07 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:35.047 22:07:07 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:05:35.047 22:07:07 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:05:35.047 22:07:07 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:35.047 22:07:07 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:35.047 22:07:07 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:35.047 22:07:07 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:35.047 22:07:07 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:35.981 00:05:35.981 real 0m1.787s 00:05:35.981 user 0m0.711s 00:05:35.981 sys 0m1.084s 00:05:35.981 22:07:08 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.981 22:07:08 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:35.981 ************************************ 00:05:35.981 END TEST allowed 00:05:35.981 ************************************ 00:05:35.981 ************************************ 00:05:35.981 END TEST acl 00:05:35.981 ************************************ 00:05:35.981 00:05:35.981 real 0m5.600s 00:05:35.981 user 0m2.277s 00:05:35.981 sys 0m3.327s 00:05:35.981 22:07:08 setup.sh.acl -- common/autotest_common.sh@1124 
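Both the denied and allowed tests finish with a verify step that resolves the driver bound to a PCI function through the /sys/bus/pci/devices/<bdf>/driver symlink and compares the resolved basename against nvme. A self-contained sketch of that lookup; driver_of and PCIROOT are hypothetical names introduced here so the check can run against a mock tree:

```shell
# Sketch of the driver check behind acl.sh's verify(): the kernel
# exposes the bound driver as a "driver" symlink under each PCI device
# node. driver_of and PCIROOT are hypothetical names for this sketch;
# the real path is /sys/bus/pci/devices/<bdf>/driver.
driver_of() {
    local link="${PCIROOT:-/sys/bus/pci/devices}/$1/driver"
    [ -L "$link" ] || { echo unbound; return 0; }
    basename "$(readlink -f "$link")"
}
```

In the log above, PCI_BLOCKED=' 0000:00:10.0' makes setup.sh skip that controller so it stays on nvme and verify passes, while PCI_ALLOWED=0000:00:10.0 lets setup.sh rebind it (nvme -> uio_pci_generic) and verify is run against the untouched 0000:00:11.0 instead.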
-- # xtrace_disable 00:05:35.981 22:07:08 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:35.981 22:07:08 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:35.981 22:07:08 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.981 22:07:08 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.981 22:07:08 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:35.981 ************************************ 00:05:35.981 START TEST hugepages 00:05:35.981 ************************************ 00:05:35.981 22:07:08 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:36.241 * Looking for test storage... 00:05:36.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.241 22:07:08 
setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 4569748 kB' 'MemAvailable: 7360392 kB' 'Buffers: 2436 kB' 'Cached: 2994200 kB' 'SwapCached: 0 kB' 'Active: 436360 kB' 'Inactive: 2665260 kB' 'Active(anon): 115476 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 106696 kB' 'Mapped: 48900 kB' 'Shmem: 10492 kB' 'KReclaimable: 82880 kB' 'Slab: 163180 kB' 'SReclaimable: 82880 kB' 'SUnreclaim: 80300 kB' 'KernelStack: 6648 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 346900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55316 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:36.241 22:07:08 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:36.241 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:36.242 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:05:36.242 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:36.242 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:36.242 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:36.242 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:36.242 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:36.242 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:36.242 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:36.242 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:36.242 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:36.242 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:36.242 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:36.242 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:36.242 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:36.242 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:36.242 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:36.242 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:36.242 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:36.242 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:36.242 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:36.242 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:36.242 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:36.242 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:36.242 22:07:08 setup.sh.hugepages -- 
00:05:36.242 22:07:08 setup.sh.hugepages -- setup/common.sh@31 -- # [repeated IFS=': '; read -r var val _; continue over /proc/meminfo keys Dirty .. HugePages_Surp, none matching Hugepagesize]
00:05:36.243 22:07:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:36.243 22:07:08 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:05:36.243 22:07:08 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:05:36.243 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:05:36.243 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:05:36.243 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:05:36.243 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:05:36.243 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:05:36.243 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:05:36.243 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:05:36.243 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:05:36.243 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:05:36.243 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:36.243 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:05:36.243 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:36.243 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:36.243 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:05:36.243 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:05:36.243 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:36.243 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:36.243 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:36.243 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:36.243 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:36.243 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:36.243 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:36.243 22:07:08 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:05:36.243 22:07:08 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:36.243 22:07:08 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:36.243 22:07:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:36.243 ************************************
00:05:36.243 START TEST default_setup
00:05:36.243 ************************************
00:05:36.243 22:07:08 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:05:36.243 22:07:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:05:36.243 22:07:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:05:36.243 22:07:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:36.243 22:07:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:05:36.243 22:07:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:36.243 22:07:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:05:36.243 22:07:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:36.243 22:07:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:36.243 22:07:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:36.243 22:07:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:36.243 22:07:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:05:36.243 22:07:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:36.243 22:07:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:36.243 22:07:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:36.243 22:07:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:36.243 22:07:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:36.243 22:07:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:36.243 22:07:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:36.243 22:07:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:05:36.243 22:07:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:05:36.243 22:07:08 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:05:36.243 22:07:08 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:36.812 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:37.071 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:05:37.071 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:05:37.071 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
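The trace above repeatedly exercises the `get_meminfo` helper in setup/common.sh, which walks key/value lines with `IFS=': '` and echoes the value of the first matching key. The following is a minimal, self-contained sketch of that lookup pattern, not the SPDK source itself; the sample input and the simplified function body are illustrative assumptions.

```shell
#!/usr/bin/env bash
# Sketch of the meminfo lookup pattern seen in the xtrace: split each line on
# ": " into key and value, skip non-matching keys with `continue`, and echo
# the value once the requested key is found. Sample data is hypothetical.
get_meminfo() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "$val"
      return 0
    fi
    continue  # non-matching key: move to the next line, as in the log
  done
  return 1    # key not present
}

sample='MemTotal: 12241972 kB
HugePages_Total: 1024
Hugepagesize: 2048 kB'

get_meminfo Hugepagesize <<<"$sample"   # prints 2048
```

Reading from stdin keeps the helper usable against either `/proc/meminfo` or a per-node `/sys/devices/system/node/node*/meminfo` file, which is how the real script parameterizes it.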
00:05:37.071 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:05:37.071 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:05:37.071 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:05:37.071 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:05:37.071 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:05:37.071 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:05:37.071 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:37.071 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:37.071 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:37.071 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:37.071 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:37.071 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:37.071 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:37.071 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:37.071 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:37.071 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:37.072 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:37.072 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6663012 kB' 'MemAvailable: 9453524 kB' 'Buffers: 2436 kB' 'Cached: 2994188 kB' 'SwapCached: 0 kB' 'Active: 453120 kB' 'Inactive: 2665264 kB' 'Active(anon): 132236 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123380 kB' 'Mapped: 48904 kB' 'Shmem: 10468 kB' 'KReclaimable: 82604 kB' 'Slab: 162884 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80280 kB' 'KernelStack: 6560 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55284 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
00:05:37.072 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # [repeated IFS=': '; read -r var val _; continue over keys MemTotal .. HardwareCorrupted, none matching AnonHugePages]
00:05:37.073 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:37.073 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:37.073 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:37.073 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:05:37.336 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:37.336 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:37.336 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:37.336 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:37.336 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:37.336 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:37.336 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:37.337 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:37.337 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:37.337 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:37.337 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6663712 kB' 'MemAvailable: 9454224 kB' 'Buffers: 2436 kB' 'Cached: 2994188 kB' 'SwapCached: 0 kB' 'Active: 452876 kB' 'Inactive: 2665264 kB' 'Active(anon): 131992 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123152 kB' 'Mapped: 48812 kB' 'Shmem: 10468 kB' 'KReclaimable: 82604 kB' 'Slab: 162872 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80268 kB' 'KernelStack: 6560 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55268 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
00:05:37.337 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # [repeated IFS=': '; read -r var val _; continue over keys MemTotal, MemFree, MemAvailable, none matching HugePages_Surp; trace truncated here]
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:05:37.338 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.338 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:37.338 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:37.338 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:37.338 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:37.338 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:37.338 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:37.338 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:37.338 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:37.338 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:37.338 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:37.338 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:37.338 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:37.338 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:37.338 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.338 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.339 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6663712 kB' 'MemAvailable: 9454228 kB' 'Buffers: 2436 kB' 'Cached: 2994188 kB' 'SwapCached: 0 kB' 'Active: 452828 kB' 'Inactive: 2665268 kB' 'Active(anon): 131944 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 
'Inactive(file): 2665268 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123096 kB' 'Mapped: 48812 kB' 'Shmem: 10468 kB' 'KReclaimable: 82604 kB' 'Slab: 162872 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80268 kB' 'KernelStack: 6560 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55268 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:05:37.339 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.339 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.339 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.339 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.339 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.339 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.339 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.339 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.339 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.339 22:07:09 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.339 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.339 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.339 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.340 22:07:09 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.340 22:07:09 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:37.340 22:07:09 
setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:37.340 nr_hugepages=1024 00:05:37.340 resv_hugepages=0 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:37.340 surplus_hugepages=0 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:37.340 anon_hugepages=0 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.340 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6663712 kB' 'MemAvailable: 9454228 kB' 'Buffers: 2436 kB' 'Cached: 2994188 kB' 'SwapCached: 0 kB' 'Active: 452772 kB' 'Inactive: 2665268 kB' 'Active(anon): 131888 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665268 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123040 kB' 'Mapped: 48812 kB' 'Shmem: 10468 kB' 'KReclaimable: 82604 kB' 'Slab: 162872 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80268 kB' 'KernelStack: 6560 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55268 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.341 22:07:09 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.341 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:37.342 22:07:09 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:37.342 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6663712 kB' 'MemUsed: 5578260 kB' 'SwapCached: 0 kB' 'Active: 452988 kB' 'Inactive: 2665268 kB' 'Active(anon): 132104 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665268 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 2996624 kB' 'Mapped: 48812 kB' 'AnonPages: 123000 kB' 'Shmem: 10468 kB' 'KernelStack: 6560 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82604 kB' 'Slab: 162872 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80268 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.343 
22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.343 22:07:09 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.343 22:07:09 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.343 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.344 22:07:09 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:37.344 node0=1024 expecting 1024 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:37.344 00:05:37.344 real 0m1.144s 00:05:37.344 user 0m0.517s 00:05:37.344 sys 0m0.602s 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.344 22:07:09 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:37.344 ************************************ 00:05:37.344 END TEST default_setup 00:05:37.344 ************************************ 00:05:37.344 22:07:09 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:37.344 22:07:09 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.344 22:07:09 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.344 22:07:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:37.344 ************************************ 00:05:37.344 START TEST per_node_1G_alloc 00:05:37.344 
************************************ 00:05:37.344 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:05:37.344 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:37.344 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:37.344 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:37.344 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:37.344 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:37.344 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:37.344 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:37.344 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:37.344 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:37.344 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:37.344 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:37.344 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:37.344 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:37.344 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:37.344 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:37.344 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:37.344 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:37.344 
22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:37.344 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:37.344 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:37.344 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:37.344 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:37.344 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:37.344 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:37.344 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:37.915 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:37.915 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:37.915 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:37.915 
22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7719348 kB' 'MemAvailable: 10509868 kB' 'Buffers: 2436 kB' 'Cached: 2994188 kB' 'SwapCached: 0 kB' 'Active: 453160 kB' 'Inactive: 2665272 kB' 'Active(anon): 132276 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 123380 kB' 'Mapped: 48940 kB' 'Shmem: 10468 kB' 'KReclaimable: 82604 kB' 'Slab: 162916 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80312 kB' 'KernelStack: 
6568 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 363944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55268 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.915 22:07:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.915 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 
22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:37.916 22:07:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:37.916 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7719348 kB' 'MemAvailable: 10509868 kB' 'Buffers: 2436 kB' 'Cached: 2994188 kB' 'SwapCached: 0 kB' 'Active: 452960 kB' 'Inactive: 2665272 kB' 'Active(anon): 132076 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665272 kB' 
'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 123184 kB' 'Mapped: 48812 kB' 'Shmem: 10468 kB' 'KReclaimable: 82604 kB' 'Slab: 162900 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80296 kB' 'KernelStack: 6544 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 363944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55252 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:05:37.916 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.916 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.916 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.916 22:07:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.916 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.916 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.916 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.916 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.916 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.916 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.917 22:07:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.917 22:07:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.917 
22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 [... field-by-field scan of /proc/meminfo repeated for every remaining key (Bounce, WritebackTmp, CommitLimit, ..., HugePages_Free, HugePages_Rsvd) until HugePages_Surp matches ...] 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- #
mem_f=/proc/meminfo 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7719348 kB' 'MemAvailable: 10509868 kB' 'Buffers: 2436 kB' 'Cached: 2994188 kB' 'SwapCached: 0 kB' 'Active: 452812 kB' 'Inactive: 2665272 kB' 'Active(anon): 131928 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 123052 kB' 'Mapped: 48812 kB' 'Shmem: 10468 kB' 'KReclaimable: 82604 kB' 'Slab: 162900 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80296 kB' 'KernelStack: 6560 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 363944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55268 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 
'DirectMap1G: 8388608 kB' 00:05:37.917 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 [... field-by-field scan of /proc/meminfo repeated for every key (MemTotal, MemFree, ..., HugePages_Total, HugePages_Free) until HugePages_Rsvd matches ...] 00:05:37.918 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:37.918 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc --
setup/hugepages.sh@100 -- # resv=0 00:05:37.919 nr_hugepages=512 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:37.919 resv_hugepages=0 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:37.919 surplus_hugepages=0 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:37.919 anon_hugepages=0 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:37.919 22:07:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7719348 kB' 'MemAvailable: 10509868 kB' 'Buffers: 2436 kB' 'Cached: 2994188 kB' 'SwapCached: 0 kB' 'Active: 453028 kB' 'Inactive: 2665272 kB' 'Active(anon): 132144 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 123268 kB' 'Mapped: 48812 kB' 'Shmem: 10468 kB' 'KReclaimable: 82604 kB' 'Slab: 162900 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80296 kB' 'KernelStack: 6544 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 363944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55268 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.919 22:07:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.919 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:37.920 22:07:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7719348 kB' 'MemUsed: 4522624 kB' 'SwapCached: 0 kB' 'Active: 453044 kB' 'Inactive: 2665272 kB' 'Active(anon): 132160 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 2996624 kB' 'Mapped: 48812 kB' 'AnonPages: 123316 kB' 'Shmem: 10468 kB' 'KernelStack: 6560 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82604 kB' 'Slab: 162900 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80296 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.920 22:07:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.920 22:07:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:37.920 
22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.920 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:37.920 [... identical IFS=': ' / read -r var val _ / continue xtrace iterations repeat for every remaining /proc/meminfo field ...] 00:05:38.180 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.180 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:38.180 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:38.180 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:38.180 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:38.180 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:38.180 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:38.180 node0=512 expecting 512 00:05:38.180 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:38.180 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:38.180 00:05:38.180 real 0m0.642s 00:05:38.180 user 0m0.267s 00:05:38.180 sys 0m0.390s 00:05:38.180 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.180 22:07:10 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:38.180 ************************************ 00:05:38.180 END TEST per_node_1G_alloc 00:05:38.180 ************************************ 00:05:38.180 22:07:10 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:38.180 22:07:10 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.180 22:07:10 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.180 22:07:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:38.180 ************************************ 00:05:38.180 START TEST even_2G_alloc 00:05:38.180 ************************************ 00:05:38.180 22:07:10 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:05:38.180 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:38.180 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:38.180 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:38.180 22:07:10 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:38.180 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:38.180 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:38.180 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:38.180 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:38.180 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:38.180 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:38.180 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:38.180 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:38.180 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:38.180 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:38.180 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:38.180 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:38.180 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:38.180 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:38.180 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:38.180 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:38.180 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:38.180 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:38.180 22:07:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@9 -- # [[ output == output ]] 00:05:38.180 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:38.439 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:38.439 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:38.439 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:38.439 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:38.439 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:38.439 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:38.439 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:38.439 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:38.439 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:38.439 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:38.439 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:38.704 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:38.704 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:38.704 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:38.704 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:38.704 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:38.704 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:38.704 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:05:38.704 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:38.704 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:38.704 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:38.704 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6670836 kB' 'MemAvailable: 9461360 kB' 'Buffers: 2436 kB' 'Cached: 2994192 kB' 'SwapCached: 0 kB' 'Active: 453184 kB' 'Inactive: 2665276 kB' 'Active(anon): 132300 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 123404 kB' 'Mapped: 48928 kB' 'Shmem: 10468 kB' 'KReclaimable: 82604 kB' 'Slab: 162928 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80324 kB' 'KernelStack: 6568 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55316 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:05:38.704 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.704 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.704 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.704 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.704 [... identical IFS=': ' / read -r var val _ / continue xtrace iterations repeat for every /proc/meminfo field preceding AnonHugePages ...] 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.705 22:07:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6670836 kB' 'MemAvailable: 9461360 kB' 'Buffers: 2436 kB' 'Cached: 2994192 kB' 'SwapCached: 0 kB' 'Active: 452900 kB' 'Inactive: 2665276 kB' 'Active(anon): 132016 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 123124 kB' 'Mapped: 48812 kB' 'Shmem: 10468 kB' 'KReclaimable: 82604 kB' 'Slab: 162928 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80324 kB' 'KernelStack: 6544 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55284 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.705 22:07:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.705 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.706 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.707 
22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.707 22:07:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 
-- # mem=("${mem[@]#Node +([0-9]) }") 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6670836 kB' 'MemAvailable: 9461360 kB' 'Buffers: 2436 kB' 'Cached: 2994192 kB' 'SwapCached: 0 kB' 'Active: 452900 kB' 'Inactive: 2665276 kB' 'Active(anon): 132016 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 123124 kB' 'Mapped: 48812 kB' 'Shmem: 10468 kB' 'KReclaimable: 82604 kB' 'Slab: 162928 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80324 kB' 'KernelStack: 6544 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55284 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.707 22:07:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.707 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.708 22:07:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.708 22:07:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.708 22:07:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.708 22:07:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.708 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.709 22:07:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.709 22:07:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:38.709 nr_hugepages=1024 00:05:38.709 resv_hugepages=0 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:38.709 surplus_hugepages=0 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:38.709 anon_hugepages=0 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:38.709 22:07:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6670836 kB' 'MemAvailable: 9461360 kB' 'Buffers: 2436 kB' 'Cached: 2994192 kB' 'SwapCached: 0 kB' 'Active: 452828 kB' 'Inactive: 2665276 kB' 'Active(anon): 131944 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 123052 kB' 'Mapped: 48812 kB' 'Shmem: 10468 kB' 'KReclaimable: 82604 kB' 'Slab: 162928 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80324 kB' 'KernelStack: 6528 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55284 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.709 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 
22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.710 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:38.711 
22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:38.711 [... setup/common.sh@31-32 xtrace repeats verbatim for each remaining meminfo field (Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted), none matching HugePages_Total ...]
00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6670836 kB' 'MemUsed: 5571136 kB' 'SwapCached: 0 kB' 'Active: 453132 kB' 'Inactive: 2665276 kB' 'Active(anon): 132248 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'FilePages: 2996628 kB' 'Mapped: 48812 kB' 'AnonPages: 123356 kB' 'Shmem: 10468 kB' 'KernelStack: 6560 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82604 kB' 'Slab: 162928 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80324 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:05:38.711
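The xtrace above is setup/common.sh's `get_meminfo` walking `/proc/meminfo` (or the per-node copy under `/sys/devices/system/node/nodeN/meminfo`) one field at a time until the requested key matches. A minimal standalone sketch of that lookup pattern, reconstructed from the trace rather than taken from the actual SPDK source, so names and structure are illustrative:

```shell
# Hypothetical helper reconstructed from the xtrace (not the real
# setup/common.sh): print the value of one field from /proc/meminfo,
# or from the per-node copy when a node number is given.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val unit
    # Per-node meminfo lives under /sys and prefixes every line with "Node N ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#"Node $node "}             # strip the per-node prefix, if any
        IFS=': ' read -r var val unit <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                        # numeric value only (kB or page count)
            return 0
        fi
    done < "$mem_f"
    return 1                                   # field not found
}
```

Usage mirrors the trace: `get_meminfo HugePages_Total` prints the system-wide count, and `get_meminfo HugePages_Surp 0` prints node 0's surplus count.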
22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:38.711 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:38.711 [... setup/common.sh@31-32 xtrace repeats verbatim for each node0 meminfo field (MemFree through HugePages_Free), none matching HugePages_Surp ...]
00:05:38.712 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:38.712 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:38.712 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:38.712 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:38.712 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:38.712 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:38.712 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:38.713 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:38.713 node0=1024 expecting 1024
00:05:38.713 22:07:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:38.713
00:05:38.713 real 0m0.614s
00:05:38.713 user 0m0.290s
00:05:38.713 sys 0m0.370s
00:05:38.713 22:07:10 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:38.713 22:07:10 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:38.713 ************************************
00:05:38.713 END TEST even_2G_alloc
00:05:38.713 ************************************
00:05:38.713 22:07:10 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:38.713 22:07:10 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:38.713 22:07:10 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:38.713 22:07:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:38.713 ************************************
00:05:38.713 START TEST odd_alloc
00:05:38.713 ************************************
00:05:38.713 22:07:10 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:05:38.713 22:07:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:38.713 22:07:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:05:38.713 22:07:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:38.713 22:07:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:38.713 22:07:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:38.713 22:07:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:38.713 22:07:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:38.713 22:07:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:38.713 22:07:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:38.713 22:07:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:38.713 22:07:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:38.713
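The even_2G_alloc pass above hinges on one check, `(( 1024 == nr_hugepages + surp + resv ))` at setup/hugepages.sh@110: the kernel's observed HugePages_Total must equal the requested count plus surplus and reserved pages. A hedged sketch of that accounting check, with an illustrative function name rather than the SPDK helper:

```shell
# Hypothetical sketch of the accounting check traced above (the name
# verify_hugepages is illustrative, not SPDK's): compare the kernel's
# HugePages_Total against the requested count plus surplus and reserved.
verify_hugepages() {
    local nr_hugepages=$1 total surp resv
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    # Succeeds only when the kernel's total matches the expected sum.
    (( ${total:-0} == nr_hugepages + ${surp:-0} + ${resv:-0} ))
}
```

The odd_alloc test that starts here exercises the same check with an odd count (nr_hugepages=1025) to catch off-by-one behavior in per-node distribution.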
22:07:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:38.713 22:07:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:38.713 22:07:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:38.713 22:07:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:38.713 22:07:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:05:38.713 22:07:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:38.713 22:07:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:38.713 22:07:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:38.713 22:07:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:38.713 22:07:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:38.713 22:07:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:05:38.713 22:07:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:38.713 22:07:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:39.287 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:39.287 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:39.287 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:39.287 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:39.287 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:05:39.287 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:39.287 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:39.287 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:39.287 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:39.287 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:39.287 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:39.287 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:39.287 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:39.287 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:39.287 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:39.287 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:39.287 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.287 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:39.287 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:39.287 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.287 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:39.287 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:39.287 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:39.287 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6665208 kB' 'MemAvailable: 9455732 kB' 'Buffers: 2436 kB' 'Cached: 2994192 kB' 'SwapCached: 0 kB' 'Active: 453304 kB' 'Inactive: 2665276 kB' 'Active(anon): 132420 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 123540 kB' 'Mapped: 48900 kB' 'Shmem: 10468 kB' 'KReclaimable: 82604 kB' 'Slab: 162912 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80308 kB' 'KernelStack: 6532 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 364076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55284 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
00:05:39.287 [... setup/common.sh@31-32 xtrace repeats verbatim for each meminfo field (MemTotal through KernelStack), none matching AnonHugePages ...]
00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:39.288 22:07:11
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.288 22:07:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.288 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.289 22:07:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:39.289 22:07:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6665208 kB' 'MemAvailable: 9455732 kB' 'Buffers: 2436 kB' 'Cached: 2994192 kB' 'SwapCached: 0 kB' 'Active: 453256 kB' 'Inactive: 2665276 kB' 'Active(anon): 132372 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 123496 kB' 'Mapped: 48900 kB' 'Shmem: 10468 kB' 'KReclaimable: 82604 kB' 'Slab: 162912 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80308 kB' 'KernelStack: 6516 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 364076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55268 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.289 
22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.289 22:07:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.289 22:07:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.289 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.290 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6665208 kB' 'MemAvailable: 9455732 kB' 'Buffers: 2436 kB' 'Cached: 2994192 kB' 'SwapCached: 0 kB' 'Active: 453272 kB' 'Inactive: 2665276 kB' 'Active(anon): 132388 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 123524 kB' 'Mapped: 48900 kB' 'Shmem: 10468 kB' 'KReclaimable: 82604 kB' 'Slab: 162912 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80308 kB' 'KernelStack: 6584 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 364076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55284 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.291 
22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.291 22:07:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.291 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 22:07:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 22:07:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 22:07:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 22:07:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 22:07:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.292 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:39.293 nr_hugepages=1025 00:05:39.293 resv_hugepages=0 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:39.293 surplus_hugepages=0 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:39.293 anon_hugepages=0 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6665208 kB' 'MemAvailable: 9455732 kB' 'Buffers: 2436 kB' 'Cached: 2994192 kB' 'SwapCached: 0 kB' 'Active: 453124 kB' 'Inactive: 2665276 kB' 'Active(anon): 132240 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 123416 kB' 'Mapped: 48900 kB' 'Shmem: 10468 kB' 'KReclaimable: 82604 kB' 'Slab: 162912 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80308 kB' 'KernelStack: 6552 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 
0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 364076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55284 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.293 
22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.293 
22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.293 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.294 22:07:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.294 22:07:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.294 22:07:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.294 22:07:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 
00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:39.294 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.295 
22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6664956 kB' 'MemUsed: 5577016 kB' 'SwapCached: 0 kB' 'Active: 453072 kB' 'Inactive: 2665276 kB' 'Active(anon): 132188 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'FilePages: 2996628 kB' 'Mapped: 48812 kB' 'AnonPages: 123352 kB' 'Shmem: 10468 kB' 'KernelStack: 6560 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82604 kB' 'Slab: 162876 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80272 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.295 22:07:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.295 22:07:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.295 22:07:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.295 22:07:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.295 22:07:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.295 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.296 22:07:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.296 22:07:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 
00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:39.296 node0=1025 expecting 1025 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:39.296 00:05:39.296 real 0m0.607s 00:05:39.296 user 0m0.290s 00:05:39.296 sys 0m0.353s 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.296 22:07:11 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:39.296 ************************************ 00:05:39.296 END TEST odd_alloc 00:05:39.296 ************************************ 00:05:39.555 22:07:11 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:39.555 22:07:11 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.555 22:07:11 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.555 22:07:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:39.555 ************************************ 00:05:39.555 START TEST custom_alloc 00:05:39.555 ************************************ 00:05:39.555 22:07:11 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:05:39.555 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:39.555 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:39.555 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:39.555 22:07:11 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@170 -- # local nodes_hp
00:05:39.555 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:05:39.555 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:05:39.555 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:05:39.555 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:39.555 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:39.555 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:39.555 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:39.555 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:39.555 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:39.555 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:39.555 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:39.555 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:39.555 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:39.555 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:39.555 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:39.555 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:39.555 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:39.555 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:39.555 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:39.555 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:39.555 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:05:39.555 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:05:39.555 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:05:39.556 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:05:39.556 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:05:39.556 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:05:39.556 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:39.556 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:39.556 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:39.556 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:39.556 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:39.556 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:39.556 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:39.556 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:05:39.556 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:39.556 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:05:39.556 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:05:39.556 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:05:39.556 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:05:39.556 22:07:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:39.556 22:07:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:39.816 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:39.816 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:39.816 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:39.816 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:05:39.816 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:05:39.816 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:05:39.816 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:39.816 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:39.816 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:39.816 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:39.816 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:39.816 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:39.816 22:07:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:39.816 22:07:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:39.816 22:07:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:39.816 22:07:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:39.816 22:07:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:39.816 22:07:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:39.816 22:07:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:39.816 22:07:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:39.816 22:07:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.816 22:07:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.816 22:07:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.816 22:07:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.816 22:07:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7715896 kB' 'MemAvailable: 10506420 kB' 'Buffers: 2436 kB' 'Cached: 2994192 kB' 'SwapCached: 0 kB' 'Active: 453296 kB' 'Inactive: 2665276 kB' 'Active(anon): 132412 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 123332 kB' 'Mapped: 48944 kB' 'Shmem: 10468 kB' 'KReclaimable: 82604 kB' 'Slab: 162896 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80292 kB' 'KernelStack: 6584 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 364076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55268 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 
512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
00:05:39.816 22:07:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:39.816 22:07:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:05:39.816 22:07:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:39.816 22:07:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:39.817 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:39.817 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:39.817 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:39.817 22:07:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:39.817 22:07:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:39.817 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:39.817 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:39.817 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:39.817 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:39.817 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.817 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:39.817 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:39.817 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.817 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.081
22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.081 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7715896 kB' 'MemAvailable: 10506420 kB' 'Buffers: 2436 kB' 'Cached: 2994192 kB' 'SwapCached: 0 kB' 'Active: 453068 kB' 'Inactive: 2665276 kB' 'Active(anon): 132184 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 123348 kB' 'Mapped: 48816 kB' 'Shmem: 10468 kB' 'KReclaimable: 82604 kB' 'Slab: 162912 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80308 kB' 'KernelStack: 6560 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 364076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55252 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:05:40.081 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.081 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.081 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.081 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.081 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.081 22:07:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:40.081 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:05:40.081 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:40.081 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.082 
22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:40.082 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.083 
22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7715896 kB' 'MemAvailable: 10506420 kB' 'Buffers: 2436 kB' 'Cached: 2994192 kB' 'SwapCached: 0 kB' 'Active: 453020 kB' 'Inactive: 2665276 kB' 'Active(anon): 132136 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 123268 kB' 'Mapped: 48816 kB' 'Shmem: 10468 kB' 'KReclaimable: 82604 kB' 'Slab: 162912 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80308 kB' 'KernelStack: 6544 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 364076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55252 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.083 22:07:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.083 22:07:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.083 22:07:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.083 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.084 22:07:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:40.084 nr_hugepages=512 00:05:40.084 resv_hugepages=0 00:05:40.084 surplus_hugepages=0 00:05:40.084 anon_hugepages=0 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.084 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7715896 kB' 'MemAvailable: 10506420 kB' 'Buffers: 2436 kB' 'Cached: 2994192 kB' 'SwapCached: 0 kB' 'Active: 452988 kB' 'Inactive: 2665276 kB' 'Active(anon): 132104 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 123228 kB' 'Mapped: 48816 kB' 'Shmem: 10468 kB' 'KReclaimable: 82604 kB' 'Slab: 162908 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80304 kB' 'KernelStack: 6544 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 364076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55268 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.085 22:07:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.085 22:07:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:40.085 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.086 22:07:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.086 22:07:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.086 22:07:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:40.086 22:07:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7715896 kB' 'MemUsed: 4526076 kB' 'SwapCached: 0 kB' 'Active: 452984 kB' 'Inactive: 2665276 kB' 'Active(anon): 132100 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'FilePages: 2996628 kB' 'Mapped: 48816 kB' 'AnonPages: 123232 kB' 'Shmem: 10468 kB' 'KernelStack: 6544 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82604 kB' 'Slab: 162904 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.087 22:07:12 
00:05:40.087 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [/proc/meminfo scan for HugePages_Surp: Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free all fail the match; each iteration runs continue, IFS=': ', read -r var val _]
00:05:40.088 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:40.088 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:40.088 22:07:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:40.088 22:07:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:40.088 22:07:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:40.088 22:07:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:40.088 node0=512 expecting 512
00:05:40.088 22:07:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:40.088 22:07:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:40.088 22:07:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:40.088
00:05:40.088 real 0m0.655s
00:05:40.088 user 0m0.301s
00:05:40.088 sys 0m0.377s
00:05:40.088 22:07:12 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:40.088 ************************************
00:05:40.088 END TEST custom_alloc
00:05:40.088 ************************************
00:05:40.088 22:07:12 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:40.088 22:07:12 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:40.088 22:07:12 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:40.088 22:07:12 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:40.088 22:07:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:40.088 ************************************
00:05:40.088 START TEST no_shrink_alloc
00:05:40.088 ************************************
00:05:40.088 22:07:12 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 --
# no_shrink_alloc
00:05:40.088 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:40.088 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:40.088 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:40.088 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:05:40.088 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:40.088 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:40.088 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:40.088 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:40.088 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:40.088 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:40.088 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:40.088 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:40.088 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:40.088 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:40.088 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:40.088 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:40.088 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:40.088 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:40.088 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:40.088 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:05:40.088 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:40.088 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:40.662 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:40.662 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:40.662 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:40.662 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:40.662 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:40.662 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:40.662 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:40.662 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:40.662 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:40.662 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:40.662 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:40.662 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:40.662 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:40.662 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:40.662 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:40.662 22:07:12
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:40.662 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:40.662 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:40.662 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:40.662 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:40.662 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:40.662 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:40.662 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6670868 kB' 'MemAvailable: 9461392 kB' 'Buffers: 2436 kB' 'Cached: 2994192 kB' 'SwapCached: 0 kB' 'Active: 453348 kB' 'Inactive: 2665276 kB' 'Active(anon): 132464 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 123556 kB' 'Mapped: 48932 kB' 'Shmem: 10468 kB' 'KReclaimable: 82604 kB' 'Slab: 162876 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80272 kB' 'KernelStack: 6552 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 364076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55300 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 
'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
00:05:40.662 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:40.662 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [/proc/meminfo scan for AnonHugePages: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted all fail the match; each iteration runs continue, IFS=': ', read -r var val _]
00:05:40.663 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:40.663 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:40.663 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:40.663 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:40.663 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:40.663 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:40.663 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:40.663 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:40.663 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:40.663 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:40.663 22:07:12
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.663 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.663 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.663 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.663 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.663 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.663 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6670868 kB' 'MemAvailable: 9461392 kB' 'Buffers: 2436 kB' 'Cached: 2994192 kB' 'SwapCached: 0 kB' 'Active: 453160 kB' 'Inactive: 2665276 kB' 'Active(anon): 132276 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'AnonPages: 123380 kB' 'Mapped: 48816 kB' 'Shmem: 10468 kB' 'KReclaimable: 82604 kB' 'Slab: 162876 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80272 kB' 'KernelStack: 6560 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 364076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55268 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:05:40.663 22:07:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.663 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read / compare / continue trace repeated for each remaining /proc/meminfo field listed in the printf above, until the requested key matches ...] 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:40.665 22:07:12 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6670868 kB' 'MemAvailable: 9461392 kB' 'Buffers: 2436 kB' 'Cached: 2994192 kB' 'SwapCached: 0 kB' 'Active: 453068 kB' 'Inactive: 2665276 kB' 'Active(anon): 132184 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'AnonPages: 123328 kB' 'Mapped: 48816 kB' 'Shmem: 10468 kB' 'KReclaimable: 82604 kB' 'Slab: 162876 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80272 kB' 'KernelStack: 6560 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 364076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55284 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.665 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.665 
22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read / compare / continue trace repeated for each remaining /proc/meminfo field, scanning for HugePages_Rsvd ...] 00:05:40.666 22:07:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.666 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.666 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.666 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.666 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.667 
22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@33 -- # return 0 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:40.667 nr_hugepages=1024 00:05:40.667 resv_hugepages=0 00:05:40.667 surplus_hugepages=0 00:05:40.667 anon_hugepages=0 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:05:40.667 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6670868 kB' 'MemAvailable: 9461392 kB' 'Buffers: 2436 kB' 'Cached: 2994192 kB' 'SwapCached: 0 kB' 'Active: 453184 kB' 'Inactive: 2665276 kB' 'Active(anon): 132300 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'AnonPages: 123400 kB' 'Mapped: 48816 kB' 'Shmem: 10468 kB' 'KReclaimable: 82604 kB' 'Slab: 162872 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80268 kB' 'KernelStack: 6528 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 364076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55284 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.668 
22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.668 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.669 22:07:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.669 22:07:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.669 22:07:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.669 22:07:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.669 22:07:12 
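The `get_nodes` step traced just above (looping over `/sys/devices/system/node/node+([0-9])` and setting `nodes_sys[${node##*node}]=1024`, `no_nodes=1`) can be sketched roughly as follows. The sysfs base-directory parameter and the `hugepages-2048kB/nr_hugepages` source for the per-node count are assumptions added here for illustration and testability; the trace shows only the resulting assignments, not where the value is read from.

```shell
# Hedged sketch of the get_nodes step traced above: enumerate NUMA node
# directories and record each node's 2 MiB hugepage count, indexed by
# node number. sysfs_base is a test hook; the real script walks
# /sys/devices/system/node directly.
get_nodes() {
  local sysfs_base=${1:-/sys/devices/system/node} node no_nodes=0
  declare -ga nodes_sys=()
  for node in "$sysfs_base"/node[0-9]*; do
    [[ -d $node ]] || continue
    # ${node##*node} strips everything through the last "node",
    # leaving the numeric node id as the array index.
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    (( ++no_nodes ))
  done
  (( no_nodes > 0 ))  # mirrors the traced sanity check (( no_nodes > 0 ))
}
```

On the single-node VM in this run, this yields `nodes_sys[0]=1024` and `no_nodes=1`, matching the trace.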
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6670868 kB' 'MemUsed: 5571104 kB' 'SwapCached: 0 kB' 'Active: 453568 kB' 'Inactive: 2665276 kB' 'Active(anon): 132684 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'FilePages: 2996628 kB' 'Mapped: 48816 kB' 'AnonPages: 123848 kB' 'Shmem: 10468 kB' 'KernelStack: 6576 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82604 kB' 'Slab: 162872 kB' 'SReclaimable: 82604 kB' 'SUnreclaim: 80268 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.669 22:07:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.669 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.670 22:07:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.670 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:40.930 node0=1024 expecting 1024 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:40.930 22:07:12 
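The `get_meminfo` helper exercised throughout this trace can be sketched roughly as below. The `IFS=': '` split, the key match, and the `echo`/return of the value follow the traced `setup/common.sh` lines; the optional file argument is an assumption added for testability, since the real script selects `/proc/meminfo` or `/sys/devices/system/node/node<N>/meminfo` itself (and strips the `Node <N> ` prefix from per-node lines before parsing).

```shell
# Hedged sketch of the get_meminfo logic seen in the trace: read
# "Key: value [kB]" lines, split on ': ', and print the value for the
# requested key. Every non-matching key hits the "continue" branch,
# which is why the log shows one [[ ... ]] / continue pair per field.
get_meminfo() {
  local get=$1 mem_f=${2:-/proc/meminfo} var val _
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "$val"
      return 0
    fi
  done < "$mem_f"
  return 1  # key not found
}
```

Against the node0 snapshot printed earlier in the trace (`HugePages_Surp: 0`), this returns `0`, matching the `common.sh@33 -- # echo 0` line above.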
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:40.930 22:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:41.191 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:41.191 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:41.191 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:41.191 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:41.191 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:41.191 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:41.191 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:41.191 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:41.191 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:41.191 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:41.191 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:41.191 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:41.191 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:41.191 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:41.191 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@19 -- # local var val 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6672140 kB' 'MemAvailable: 9462664 kB' 'Buffers: 2436 kB' 'Cached: 2994196 kB' 'SwapCached: 0 kB' 'Active: 448324 kB' 'Inactive: 2665280 kB' 'Active(anon): 127440 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 118676 kB' 'Mapped: 48240 kB' 'Shmem: 10468 kB' 'KReclaimable: 82600 kB' 'Slab: 162620 kB' 'SReclaimable: 82600 kB' 'SUnreclaim: 80020 kB' 'KernelStack: 6500 kB' 'PageTables: 4028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55204 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.192 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.193 22:07:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.193 22:07:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:41.193 22:07:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6672156 kB' 'MemAvailable: 9462680 kB' 'Buffers: 2436 kB' 'Cached: 2994196 kB' 'SwapCached: 0 kB' 'Active: 447944 kB' 'Inactive: 2665280 kB' 'Active(anon): 127060 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 118224 kB' 'Mapped: 48072 kB' 'Shmem: 10468 kB' 'KReclaimable: 82600 kB' 'Slab: 162620 kB' 'SReclaimable: 82600 kB' 'SUnreclaim: 80020 kB' 'KernelStack: 6420 kB' 'PageTables: 3520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55156 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 
'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.193 22:07:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.193 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 
22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.194 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.195 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.195 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.195 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:41.195 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.195 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.195 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.195 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.195 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.195 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.195 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.195 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.195 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.195 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.195 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.195 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.195 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.195 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.195 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.456 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6672612 kB' 'MemAvailable: 9463136 kB' 'Buffers: 2436 kB' 'Cached: 2994196 kB' 'SwapCached: 0 kB' 'Active: 448148 kB' 'Inactive: 2665280 kB' 'Active(anon): 127264 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 118428 kB' 'Mapped: 48072 kB' 'Shmem: 10468 kB' 'KReclaimable: 82600 kB' 'Slab: 162620 kB' 'SReclaimable: 82600 kB' 'SUnreclaim: 80020 kB' 'KernelStack: 6404 kB' 
'PageTables: 3472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55156 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.457 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:41.458 22:07:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:41.458 nr_hugepages=1024 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:41.458 resv_hugepages=0 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:41.458 surplus_hugepages=0 00:05:41.458 anon_hugepages=0 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:41.458 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6672612 kB' 'MemAvailable: 9463136 kB' 'Buffers: 2436 kB' 'Cached: 2994196 kB' 'SwapCached: 0 kB' 'Active: 448104 kB' 'Inactive: 2665280 kB' 'Active(anon): 127220 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 118324 kB' 'Mapped: 48072 kB' 'Shmem: 10468 kB' 'KReclaimable: 82600 kB' 'Slab: 162620 kB' 'SReclaimable: 82600 kB' 'SUnreclaim: 80020 kB' 'KernelStack: 6432 kB' 'PageTables: 3760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55172 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.459 22:07:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.459 22:07:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.459 22:07:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.459 
22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.459 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.460 
22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.460 22:07:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.460 22:07:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.460 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6672612 kB' 'MemUsed: 5569360 kB' 'SwapCached: 0 kB' 'Active: 448112 kB' 'Inactive: 2665280 kB' 'Active(anon): 127228 kB' 'Inactive(anon): 0 kB' 'Active(file): 320884 kB' 'Inactive(file): 2665280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'FilePages: 2996632 kB' 'Mapped: 48072 kB' 'AnonPages: 118324 kB' 'Shmem: 10468 kB' 'KernelStack: 6432 kB' 'PageTables: 3760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82600 kB' 'Slab: 162620 kB' 'SReclaimable: 82600 kB' 'SUnreclaim: 80020 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:41.461 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.461 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.461 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.461 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.461 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.461 22:07:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 22:07:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.462 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:41.463 node0=1024 expecting 1024 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:41.463 00:05:41.463 real 0m1.293s 00:05:41.463 user 0m0.572s 00:05:41.463 sys 0m0.756s 00:05:41.463 ************************************ 
00:05:41.463 END TEST no_shrink_alloc 00:05:41.463 ************************************ 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.463 22:07:13 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:41.463 22:07:13 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:41.463 22:07:13 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:41.463 22:07:13 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:41.463 22:07:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:41.463 22:07:13 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:41.463 22:07:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:41.463 22:07:13 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:41.463 22:07:13 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:41.463 22:07:13 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:41.463 00:05:41.463 real 0m5.463s 00:05:41.463 user 0m2.407s 00:05:41.463 sys 0m3.186s 00:05:41.463 22:07:13 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.463 ************************************ 00:05:41.463 END TEST hugepages 00:05:41.463 ************************************ 00:05:41.463 22:07:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:41.463 22:07:13 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:41.463 22:07:13 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.463 22:07:13 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.463 22:07:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:41.463 
************************************ 00:05:41.463 START TEST driver 00:05:41.463 ************************************ 00:05:41.463 22:07:13 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:41.721 * Looking for test storage... 00:05:41.722 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:41.722 22:07:13 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:41.722 22:07:13 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:41.722 22:07:13 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:42.290 22:07:14 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:42.290 22:07:14 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.290 22:07:14 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.290 22:07:14 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:42.290 ************************************ 00:05:42.290 START TEST guess_driver 00:05:42.290 ************************************ 00:05:42.290 22:07:14 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:42.290 22:07:14 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:42.290 22:07:14 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:42.290 22:07:14 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:42.290 22:07:14 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:42.290 22:07:14 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:42.290 22:07:14 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:42.290 22:07:14 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:42.290 22:07:14 
setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:42.290 22:07:14 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:42.290 22:07:14 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:42.290 22:07:14 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:42.290 22:07:14 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:42.290 22:07:14 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:42.290 22:07:14 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:42.290 22:07:14 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:42.290 22:07:14 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:42.290 22:07:14 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:42.290 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:42.290 22:07:14 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:42.290 Looking for driver=uio_pci_generic 00:05:42.290 22:07:14 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:42.290 22:07:14 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:42.290 22:07:14 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:42.290 22:07:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:42.290 22:07:14 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:42.290 22:07:14 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.290 22:07:14 
setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:43.228 22:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:43.228 22:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:43.228 22:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:43.228 22:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:43.228 22:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:43.228 22:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:43.228 22:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:43.228 22:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:43.228 22:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:43.487 22:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:43.487 22:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:43.487 22:07:15 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:43.487 22:07:15 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:44.056 00:05:44.056 real 0m1.723s 00:05:44.056 user 0m0.611s 00:05:44.056 sys 0m1.158s 00:05:44.056 22:07:16 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.056 22:07:16 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:44.056 ************************************ 00:05:44.056 END TEST guess_driver 00:05:44.056 ************************************ 00:05:44.056 ************************************ 00:05:44.056 END TEST driver 00:05:44.056 
************************************ 00:05:44.056 00:05:44.056 real 0m2.589s 00:05:44.056 user 0m0.887s 00:05:44.056 sys 0m1.815s 00:05:44.056 22:07:16 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.056 22:07:16 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:44.315 22:07:16 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:44.315 22:07:16 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.315 22:07:16 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.315 22:07:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:44.315 ************************************ 00:05:44.315 START TEST devices 00:05:44.315 ************************************ 00:05:44.315 22:07:16 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:44.315 * Looking for test storage... 00:05:44.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:44.315 22:07:16 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:44.315 22:07:16 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:44.315 22:07:16 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:44.315 22:07:16 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:45.253 22:07:17 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:45.253 22:07:17 setup.sh.devices -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:05:45.253 22:07:17 setup.sh.devices -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:05:45.253 22:07:17 setup.sh.devices -- common/autotest_common.sh@1668 -- # local nvme bdf 00:05:45.253 22:07:17 setup.sh.devices -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:05:45.253 22:07:17 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned 
nvme0n1 00:05:45.253 22:07:17 setup.sh.devices -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:05:45.253 22:07:17 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:45.253 22:07:17 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:05:45.253 22:07:17 setup.sh.devices -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:05:45.253 22:07:17 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:05:45.253 22:07:17 setup.sh.devices -- common/autotest_common.sh@1660 -- # local device=nvme0n2 00:05:45.253 22:07:17 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:05:45.253 22:07:17 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:05:45.253 22:07:17 setup.sh.devices -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:05:45.253 22:07:17 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:05:45.253 22:07:17 setup.sh.devices -- common/autotest_common.sh@1660 -- # local device=nvme0n3 00:05:45.253 22:07:17 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:05:45.253 22:07:17 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:05:45.253 22:07:17 setup.sh.devices -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:05:45.253 22:07:17 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:05:45.253 22:07:17 setup.sh.devices -- common/autotest_common.sh@1660 -- # local device=nvme1n1 00:05:45.253 22:07:17 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:45.253 22:07:17 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:05:45.253 22:07:17 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:45.253 22:07:17 
setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:45.253 22:07:17 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:45.253 22:07:17 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:45.253 22:07:17 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:45.253 22:07:17 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:45.253 22:07:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:45.253 22:07:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:45.253 22:07:17 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:45.254 22:07:17 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:45.254 22:07:17 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:45.254 22:07:17 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:45.254 22:07:17 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:45.254 No valid GPT data, bailing 00:05:45.254 22:07:17 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:45.254 22:07:17 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:45.254 22:07:17 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:45.254 22:07:17 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:45.254 22:07:17 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:45.254 22:07:17 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:45.254 22:07:17 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:45.254 22:07:17 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:45.254 22:07:17 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:45.254 22:07:17 setup.sh.devices -- 
setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:45.254 22:07:17 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:45.254 22:07:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:05:45.254 22:07:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:45.254 22:07:17 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:45.254 22:07:17 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:45.254 22:07:17 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:05:45.254 22:07:17 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:05:45.254 22:07:17 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:05:45.254 No valid GPT data, bailing 00:05:45.254 22:07:17 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:05:45.254 22:07:17 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:45.254 22:07:17 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:45.254 22:07:17 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:05:45.254 22:07:17 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:05:45.254 22:07:17 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:05:45.254 22:07:17 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:45.254 22:07:17 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:45.254 22:07:17 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:45.254 22:07:17 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:45.254 22:07:17 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:45.254 22:07:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:05:45.254 22:07:17 
setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:45.254 22:07:17 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:45.254 22:07:17 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:45.254 22:07:17 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:05:45.254 22:07:17 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:05:45.254 22:07:17 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:05:45.514 No valid GPT data, bailing 00:05:45.514 22:07:17 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:05:45.514 22:07:17 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:45.514 22:07:17 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:45.514 22:07:17 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:05:45.514 22:07:17 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:05:45.514 22:07:17 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:05:45.514 22:07:17 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:45.514 22:07:17 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:45.514 22:07:17 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:45.514 22:07:17 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:45.514 22:07:17 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:45.514 22:07:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:45.514 22:07:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:45.514 22:07:17 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:45.514 22:07:17 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:45.514 22:07:17 
setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:45.514 22:07:17 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:45.514 22:07:17 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:45.514 No valid GPT data, bailing 00:05:45.514 22:07:17 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:45.514 22:07:17 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:45.514 22:07:17 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:45.514 22:07:17 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:45.514 22:07:17 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:45.514 22:07:17 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:45.514 22:07:17 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:45.514 22:07:17 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:45.514 22:07:17 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:45.514 22:07:17 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:45.514 22:07:17 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:45.514 22:07:17 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:45.514 22:07:17 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:45.514 22:07:17 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.514 22:07:17 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.514 22:07:17 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:45.514 ************************************ 00:05:45.514 START TEST nvme_mount 00:05:45.514 ************************************ 00:05:45.514 22:07:17 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@1123 -- # nvme_mount 00:05:45.514 22:07:17 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:45.514 22:07:17 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:45.514 22:07:17 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:45.514 22:07:17 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:45.514 22:07:17 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:45.514 22:07:17 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:45.514 22:07:17 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:45.514 22:07:17 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:45.514 22:07:17 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:45.514 22:07:17 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:45.514 22:07:17 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:45.514 22:07:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:45.514 22:07:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:45.514 22:07:17 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:45.514 22:07:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:45.514 22:07:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:45.514 22:07:17 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:45.514 22:07:17 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:45.514 22:07:17 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # 
/home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:46.453 Creating new GPT entries in memory. 00:05:46.453 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:46.453 other utilities. 00:05:46.453 22:07:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:46.453 22:07:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:46.453 22:07:18 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:46.453 22:07:18 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:46.453 22:07:18 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:47.832 Creating new GPT entries in memory. 00:05:47.832 The operation has completed successfully. 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 70505 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding 
PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:47.832 22:07:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.092 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:48.092 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.092 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:48.092 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.350 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:48.351 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:48.351 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:48.351 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:48.351 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:48.351 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:48.351 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:48.351 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:48.351 
22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:48.351 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:48.351 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:48.351 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:48.351 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:48.610 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:48.610 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:48.610 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:48.610 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:48.610 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:48.610 22:07:20 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:48.610 22:07:20 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:48.610 22:07:20 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:48.610 22:07:20 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:48.610 22:07:20 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:48.610 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:48.610 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:48.610 
22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:48.610 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:48.610 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:48.610 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:48.610 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:48.610 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:48.610 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:48.610 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.610 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:48.610 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:48.610 22:07:20 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:48.610 22:07:20 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:48.869 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:48.869 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:48.869 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:48.869 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.869 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 
00:05:48.869 22:07:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.128 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.128 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.128 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.128 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.128 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:49.128 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:49.128 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.128 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:49.128 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:49.128 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.128 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:49.128 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:49.128 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:49.128 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:49.128 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:49.128 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:49.128 22:07:21 
setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:49.128 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:49.128 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.128 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:49.128 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:49.128 22:07:21 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:49.128 22:07:21 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:49.695 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.695 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:49.695 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:49.695 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.695 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.695 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.695 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.695 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.695 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.695 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.954 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:49.954 
22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:49.954 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:49.954 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:49.955 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.955 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:49.955 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:49.955 22:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:49.955 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:49.955 00:05:49.955 real 0m4.352s 00:05:49.955 user 0m0.790s 00:05:49.955 sys 0m1.315s 00:05:49.955 22:07:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.955 22:07:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:49.955 ************************************ 00:05:49.955 END TEST nvme_mount 00:05:49.955 ************************************ 00:05:49.955 22:07:22 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:49.955 22:07:22 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.955 22:07:22 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.955 22:07:22 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:49.955 ************************************ 00:05:49.955 START TEST dm_mount 00:05:49.955 ************************************ 00:05:49.955 22:07:22 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:49.955 22:07:22 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:49.955 22:07:22 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:49.955 
22:07:22 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:49.955 22:07:22 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:49.955 22:07:22 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:49.955 22:07:22 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:49.955 22:07:22 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:49.955 22:07:22 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:49.955 22:07:22 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:49.955 22:07:22 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:49.955 22:07:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:49.955 22:07:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:49.955 22:07:22 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:49.955 22:07:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:49.955 22:07:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:49.955 22:07:22 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:49.955 22:07:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:49.955 22:07:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:49.955 22:07:22 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:49.955 22:07:22 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:49.955 22:07:22 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:50.893 Creating new GPT entries in memory. 00:05:50.893 GPT data structures destroyed! 
You may now partition the disk using fdisk or 00:05:50.893 other utilities. 00:05:50.893 22:07:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:50.893 22:07:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:50.893 22:07:23 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:50.893 22:07:23 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:50.893 22:07:23 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:52.268 Creating new GPT entries in memory. 00:05:52.268 The operation has completed successfully. 00:05:52.268 22:07:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:52.268 22:07:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:52.268 22:07:24 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:52.268 22:07:24 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:52.268 22:07:24 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:53.205 The operation has completed successfully. 
00:05:53.205 22:07:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:53.205 22:07:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:53.205 22:07:25 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 70942 00:05:53.205 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test 
mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 
00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:53.206 22:07:25 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:53.465 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:53.465 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:53.465 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:53.465 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.465 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:53.465 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.465 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:53.465 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.723 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:53.723 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.723 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:53.723 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:53.723 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:53.723 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:53.723 22:07:25 
setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:53.723 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:53.723 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:53.723 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:53.723 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:53.723 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:53.723 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:53.723 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:53.723 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:53.723 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:53.723 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.723 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:53.723 22:07:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:53.723 22:07:25 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:53.723 22:07:25 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:53.982 22:07:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:53.982 22:07:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:53.982 22:07:26 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:53.982 22:07:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.982 22:07:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:53.982 22:07:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.241 22:07:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:54.241 22:07:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.241 22:07:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:54.241 22:07:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.241 22:07:26 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:54.241 22:07:26 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:54.241 22:07:26 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:54.241 22:07:26 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:54.241 22:07:26 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:54.241 22:07:26 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:54.241 22:07:26 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:54.500 22:07:26 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:54.500 22:07:26 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:54.500 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:54.500 22:07:26 setup.sh.devices.dm_mount -- 
setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:54.500 22:07:26 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:54.500 00:05:54.500 real 0m4.450s 00:05:54.500 user 0m0.519s 00:05:54.500 sys 0m0.914s 00:05:54.500 22:07:26 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.500 ************************************ 00:05:54.500 END TEST dm_mount 00:05:54.500 ************************************ 00:05:54.500 22:07:26 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:54.500 22:07:26 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:54.500 22:07:26 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:54.500 22:07:26 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:54.500 22:07:26 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:54.500 22:07:26 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:54.500 22:07:26 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:54.500 22:07:26 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:54.760 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:54.760 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:54.760 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:54.760 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:54.760 22:07:26 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:54.760 22:07:26 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:54.760 22:07:26 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:54.760 22:07:26 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:54.760 
22:07:26 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:54.760 22:07:26 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:54.760 22:07:26 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:54.760 00:05:54.760 real 0m10.527s 00:05:54.760 user 0m2.019s 00:05:54.760 sys 0m2.958s 00:05:54.760 22:07:26 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.760 ************************************ 00:05:54.760 END TEST devices 00:05:54.760 ************************************ 00:05:54.760 22:07:26 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:54.760 00:05:54.760 real 0m24.526s 00:05:54.760 user 0m7.703s 00:05:54.760 sys 0m11.521s 00:05:54.760 22:07:26 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.760 22:07:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:54.760 ************************************ 00:05:54.760 END TEST setup.sh 00:05:54.760 ************************************ 00:05:54.760 22:07:26 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:55.719 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:55.719 Hugepages 00:05:55.719 node hugesize free / total 00:05:55.719 node0 1048576kB 0 / 0 00:05:55.719 node0 2048kB 2048 / 2048 00:05:55.719 00:05:55.719 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:55.719 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:55.719 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:55.978 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:05:55.978 22:07:27 -- spdk/autotest.sh@130 -- # uname -s 00:05:55.978 22:07:27 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:55.978 22:07:27 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:55.978 22:07:27 -- common/autotest_common.sh@1529 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:56.546 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:56.805 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:56.805 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:56.805 22:07:28 -- common/autotest_common.sh@1530 -- # sleep 1 00:05:58.182 22:07:29 -- common/autotest_common.sh@1531 -- # bdfs=() 00:05:58.182 22:07:29 -- common/autotest_common.sh@1531 -- # local bdfs 00:05:58.182 22:07:29 -- common/autotest_common.sh@1532 -- # bdfs=($(get_nvme_bdfs)) 00:05:58.182 22:07:29 -- common/autotest_common.sh@1532 -- # get_nvme_bdfs 00:05:58.182 22:07:29 -- common/autotest_common.sh@1511 -- # bdfs=() 00:05:58.182 22:07:29 -- common/autotest_common.sh@1511 -- # local bdfs 00:05:58.182 22:07:29 -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:58.182 22:07:29 -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:58.182 22:07:29 -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:05:58.182 22:07:30 -- common/autotest_common.sh@1513 -- # (( 2 == 0 )) 00:05:58.182 22:07:30 -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:58.182 22:07:30 -- common/autotest_common.sh@1534 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:58.441 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:58.441 Waiting for block devices as requested 00:05:58.441 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:58.699 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:58.699 22:07:30 -- common/autotest_common.sh@1536 -- # for bdf in "${bdfs[@]}" 00:05:58.699 22:07:30 -- common/autotest_common.sh@1537 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:58.699 22:07:30 -- common/autotest_common.sh@1500 -- # readlink -f 
/sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:58.699 22:07:30 -- common/autotest_common.sh@1500 -- # grep 0000:00:10.0/nvme/nvme 00:05:58.699 22:07:30 -- common/autotest_common.sh@1500 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:58.699 22:07:30 -- common/autotest_common.sh@1501 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:58.699 22:07:30 -- common/autotest_common.sh@1505 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:58.699 22:07:30 -- common/autotest_common.sh@1505 -- # printf '%s\n' nvme1 00:05:58.699 22:07:30 -- common/autotest_common.sh@1537 -- # nvme_ctrlr=/dev/nvme1 00:05:58.699 22:07:30 -- common/autotest_common.sh@1538 -- # [[ -z /dev/nvme1 ]] 00:05:58.699 22:07:30 -- common/autotest_common.sh@1543 -- # nvme id-ctrl /dev/nvme1 00:05:58.699 22:07:30 -- common/autotest_common.sh@1543 -- # cut -d: -f2 00:05:58.699 22:07:30 -- common/autotest_common.sh@1543 -- # grep oacs 00:05:58.699 22:07:30 -- common/autotest_common.sh@1543 -- # oacs=' 0x12a' 00:05:58.699 22:07:30 -- common/autotest_common.sh@1544 -- # oacs_ns_manage=8 00:05:58.699 22:07:30 -- common/autotest_common.sh@1546 -- # [[ 8 -ne 0 ]] 00:05:58.699 22:07:30 -- common/autotest_common.sh@1552 -- # nvme id-ctrl /dev/nvme1 00:05:58.699 22:07:30 -- common/autotest_common.sh@1552 -- # grep unvmcap 00:05:58.699 22:07:30 -- common/autotest_common.sh@1552 -- # cut -d: -f2 00:05:58.699 22:07:30 -- common/autotest_common.sh@1552 -- # unvmcap=' 0' 00:05:58.699 22:07:30 -- common/autotest_common.sh@1553 -- # [[ 0 -eq 0 ]] 00:05:58.699 22:07:30 -- common/autotest_common.sh@1555 -- # continue 00:05:58.699 22:07:30 -- common/autotest_common.sh@1536 -- # for bdf in "${bdfs[@]}" 00:05:58.699 22:07:30 -- common/autotest_common.sh@1537 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:58.699 22:07:30 -- common/autotest_common.sh@1500 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:58.699 22:07:30 -- 
common/autotest_common.sh@1500 -- # grep 0000:00:11.0/nvme/nvme 00:05:58.700 22:07:30 -- common/autotest_common.sh@1500 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:58.700 22:07:30 -- common/autotest_common.sh@1501 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:58.700 22:07:30 -- common/autotest_common.sh@1505 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:58.700 22:07:30 -- common/autotest_common.sh@1505 -- # printf '%s\n' nvme0 00:05:58.700 22:07:30 -- common/autotest_common.sh@1537 -- # nvme_ctrlr=/dev/nvme0 00:05:58.700 22:07:30 -- common/autotest_common.sh@1538 -- # [[ -z /dev/nvme0 ]] 00:05:58.700 22:07:30 -- common/autotest_common.sh@1543 -- # nvme id-ctrl /dev/nvme0 00:05:58.700 22:07:30 -- common/autotest_common.sh@1543 -- # grep oacs 00:05:58.700 22:07:30 -- common/autotest_common.sh@1543 -- # cut -d: -f2 00:05:58.700 22:07:30 -- common/autotest_common.sh@1543 -- # oacs=' 0x12a' 00:05:58.700 22:07:30 -- common/autotest_common.sh@1544 -- # oacs_ns_manage=8 00:05:58.700 22:07:30 -- common/autotest_common.sh@1546 -- # [[ 8 -ne 0 ]] 00:05:58.700 22:07:30 -- common/autotest_common.sh@1552 -- # nvme id-ctrl /dev/nvme0 00:05:58.700 22:07:30 -- common/autotest_common.sh@1552 -- # grep unvmcap 00:05:58.700 22:07:30 -- common/autotest_common.sh@1552 -- # cut -d: -f2 00:05:58.700 22:07:30 -- common/autotest_common.sh@1552 -- # unvmcap=' 0' 00:05:58.700 22:07:30 -- common/autotest_common.sh@1553 -- # [[ 0 -eq 0 ]] 00:05:58.700 22:07:30 -- common/autotest_common.sh@1555 -- # continue 00:05:58.700 22:07:30 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:58.700 22:07:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:58.700 22:07:30 -- common/autotest_common.sh@10 -- # set +x 00:05:58.700 22:07:30 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:58.700 22:07:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:58.700 22:07:30 -- common/autotest_common.sh@10 -- 
# set +x 00:05:58.700 22:07:30 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:59.637 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:59.637 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:59.637 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:59.897 22:07:31 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:59.897 22:07:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:59.897 22:07:31 -- common/autotest_common.sh@10 -- # set +x 00:05:59.897 22:07:31 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:59.897 22:07:31 -- common/autotest_common.sh@1589 -- # mapfile -t bdfs 00:05:59.897 22:07:31 -- common/autotest_common.sh@1589 -- # get_nvme_bdfs_by_id 0x0a54 00:05:59.897 22:07:31 -- common/autotest_common.sh@1575 -- # bdfs=() 00:05:59.897 22:07:31 -- common/autotest_common.sh@1575 -- # local bdfs 00:05:59.897 22:07:31 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs 00:05:59.897 22:07:31 -- common/autotest_common.sh@1511 -- # bdfs=() 00:05:59.897 22:07:31 -- common/autotest_common.sh@1511 -- # local bdfs 00:05:59.897 22:07:31 -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:59.897 22:07:31 -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:59.897 22:07:31 -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:05:59.897 22:07:31 -- common/autotest_common.sh@1513 -- # (( 2 == 0 )) 00:05:59.897 22:07:31 -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:59.897 22:07:31 -- common/autotest_common.sh@1577 -- # for bdf in $(get_nvme_bdfs) 00:05:59.897 22:07:31 -- common/autotest_common.sh@1578 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:59.897 22:07:31 -- common/autotest_common.sh@1578 -- # device=0x0010 00:05:59.897 22:07:31 -- 
common/autotest_common.sh@1579 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:59.897 22:07:31 -- common/autotest_common.sh@1577 -- # for bdf in $(get_nvme_bdfs) 00:05:59.897 22:07:31 -- common/autotest_common.sh@1578 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:59.897 22:07:31 -- common/autotest_common.sh@1578 -- # device=0x0010 00:05:59.897 22:07:31 -- common/autotest_common.sh@1579 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:59.897 22:07:31 -- common/autotest_common.sh@1584 -- # printf '%s\n' 00:05:59.897 22:07:31 -- common/autotest_common.sh@1590 -- # [[ -z '' ]] 00:05:59.897 22:07:31 -- common/autotest_common.sh@1591 -- # return 0 00:05:59.897 22:07:31 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:59.897 22:07:31 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:59.897 22:07:31 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:59.897 22:07:31 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:59.897 22:07:31 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:59.897 22:07:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:59.897 22:07:31 -- common/autotest_common.sh@10 -- # set +x 00:05:59.897 22:07:31 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:05:59.897 22:07:31 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:05:59.897 22:07:31 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:05:59.897 22:07:31 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:59.897 22:07:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.897 22:07:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.897 22:07:31 -- common/autotest_common.sh@10 -- # set +x 00:05:59.897 ************************************ 00:05:59.897 START TEST env 00:05:59.897 ************************************ 00:05:59.897 22:07:32 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:00.157 * Looking for test storage... 
00:06:00.157 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:00.157 22:07:32 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:00.157 22:07:32 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.157 22:07:32 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.157 22:07:32 env -- common/autotest_common.sh@10 -- # set +x 00:06:00.157 ************************************ 00:06:00.157 START TEST env_memory 00:06:00.157 ************************************ 00:06:00.157 22:07:32 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:00.157 00:06:00.157 00:06:00.157 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.157 http://cunit.sourceforge.net/ 00:06:00.157 00:06:00.157 00:06:00.157 Suite: memory 00:06:00.157 Test: alloc and free memory map ...[2024-07-23 22:07:32.167071] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:00.157 passed 00:06:00.157 Test: mem map translation ...[2024-07-23 22:07:32.200546] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:00.157 [2024-07-23 22:07:32.200608] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:00.157 [2024-07-23 22:07:32.200668] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:00.157 [2024-07-23 22:07:32.200680] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:00.157 passed 00:06:00.157 Test: mem map registration ...[2024-07-23 22:07:32.265771] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:00.157 [2024-07-23 22:07:32.265827] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:00.157 passed 00:06:00.417 Test: mem map adjacent registrations ...passed 00:06:00.417 00:06:00.417 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.417 suites 1 1 n/a 0 0 00:06:00.417 tests 4 4 4 0 0 00:06:00.417 asserts 152 152 152 0 n/a 00:06:00.417 00:06:00.417 Elapsed time = 0.223 seconds 00:06:00.417 00:06:00.417 real 0m0.240s 00:06:00.417 user 0m0.219s 00:06:00.417 sys 0m0.017s 00:06:00.417 22:07:32 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.417 22:07:32 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:00.417 ************************************ 00:06:00.417 END TEST env_memory 00:06:00.417 ************************************ 00:06:00.417 22:07:32 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:00.417 22:07:32 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.417 22:07:32 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.417 22:07:32 env -- common/autotest_common.sh@10 -- # set +x 00:06:00.417 ************************************ 00:06:00.417 START TEST env_vtophys 00:06:00.417 ************************************ 00:06:00.417 22:07:32 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:00.417 EAL: lib.eal log level changed from notice to debug 00:06:00.417 EAL: Detected lcore 0 as core 0 on socket 0 00:06:00.417 EAL: Detected lcore 1 as core 0 on socket 0 00:06:00.417 EAL: Detected lcore 2 as core 0 on socket 0 00:06:00.417 EAL: Detected lcore 3 as core 0 on socket 0 00:06:00.417 EAL: Detected lcore 4 as 
core 0 on socket 0 00:06:00.417 EAL: Detected lcore 5 as core 0 on socket 0 00:06:00.417 EAL: Detected lcore 6 as core 0 on socket 0 00:06:00.417 EAL: Detected lcore 7 as core 0 on socket 0 00:06:00.417 EAL: Detected lcore 8 as core 0 on socket 0 00:06:00.417 EAL: Detected lcore 9 as core 0 on socket 0 00:06:00.417 EAL: Maximum logical cores by configuration: 128 00:06:00.417 EAL: Detected CPU lcores: 10 00:06:00.417 EAL: Detected NUMA nodes: 1 00:06:00.417 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:06:00.417 EAL: Detected shared linkage of DPDK 00:06:00.417 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2 00:06:00.417 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2 00:06:00.417 EAL: Registered [vdev] bus. 00:06:00.417 EAL: bus.vdev log level changed from disabled to notice 00:06:00.417 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2 00:06:00.417 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2 00:06:00.417 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:00.417 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:00.417 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:06:00.417 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:06:00.417 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:06:00.417 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:06:00.417 EAL: No shared files mode enabled, IPC will be disabled 00:06:00.417 EAL: No shared files mode enabled, IPC is disabled 00:06:00.417 EAL: Selected IOVA mode 'PA' 00:06:00.417 EAL: Probing VFIO support... 00:06:00.417 EAL: Module /sys/module/vfio not found! 
error 2 (No such file or directory) 00:06:00.417 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:00.417 EAL: Ask a virtual area of 0x2e000 bytes 00:06:00.417 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:00.417 EAL: Setting up physically contiguous memory... 00:06:00.417 EAL: Setting maximum number of open files to 524288 00:06:00.417 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:00.417 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:00.417 EAL: Ask a virtual area of 0x61000 bytes 00:06:00.417 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:00.417 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:00.417 EAL: Ask a virtual area of 0x400000000 bytes 00:06:00.417 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:00.417 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:00.417 EAL: Ask a virtual area of 0x61000 bytes 00:06:00.417 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:00.417 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:00.417 EAL: Ask a virtual area of 0x400000000 bytes 00:06:00.417 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:00.417 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:00.417 EAL: Ask a virtual area of 0x61000 bytes 00:06:00.417 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:00.417 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:00.417 EAL: Ask a virtual area of 0x400000000 bytes 00:06:00.417 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:00.417 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:00.417 EAL: Ask a virtual area of 0x61000 bytes 00:06:00.417 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:00.417 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:00.417 EAL: Ask a virtual area of 0x400000000 
bytes 00:06:00.417 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:00.417 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:00.417 EAL: Hugepages will be freed exactly as allocated. 00:06:00.417 EAL: No shared files mode enabled, IPC is disabled 00:06:00.417 EAL: No shared files mode enabled, IPC is disabled 00:06:00.417 EAL: TSC frequency is ~2100000 KHz 00:06:00.417 EAL: Main lcore 0 is ready (tid=7fac784b2a00;cpuset=[0]) 00:06:00.417 EAL: Trying to obtain current memory policy. 00:06:00.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.417 EAL: Restoring previous memory policy: 0 00:06:00.417 EAL: request: mp_malloc_sync 00:06:00.417 EAL: No shared files mode enabled, IPC is disabled 00:06:00.417 EAL: Heap on socket 0 was expanded by 2MB 00:06:00.417 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:00.417 EAL: No shared files mode enabled, IPC is disabled 00:06:00.417 EAL: Mem event callback 'spdk:(nil)' registered 00:06:00.417 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:00.417 00:06:00.417 00:06:00.417 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.417 http://cunit.sourceforge.net/ 00:06:00.417 00:06:00.417 00:06:00.417 Suite: components_suite 00:06:00.417 Test: vtophys_malloc_test ...passed 00:06:00.417 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
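Each of the four `0x400000000`-byte VA reservations above is exactly `n_segs * hugepage_sz`: the EAL lines report four memseg lists of 8192 pages at 2 MiB each. The arithmetic can be checked directly in shell (the numbers are taken from the EAL output above; this is a verification of the log, not EAL code):

```shell
# Each EAL memseg list reserves n_segs * hugepage_sz of virtual address
# space; 8192 pages of 2 MiB is 0x400000000 bytes (16 GiB) per list.
n_segs=8192
hugepage_sz=$((2 * 1024 * 1024))   # 2097152, the hugepage_sz in the log
list_bytes=$((n_segs * hugepage_sz))
printf 'per-list: 0x%x bytes (%d GiB)\n' "$list_bytes" $((list_bytes >> 30))
# per-list: 0x400000000 bytes (16 GiB)
printf 'total for 4 lists: %d GiB\n' $((4 * list_bytes >> 30))
# total for 4 lists: 64 GiB
```

That 64 GiB is address space only; "Hugepages will be freed exactly as allocated" notes the backing pages are mapped on demand.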
00:06:00.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.417 EAL: Restoring previous memory policy: 4 00:06:00.417 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.417 EAL: request: mp_malloc_sync 00:06:00.417 EAL: No shared files mode enabled, IPC is disabled 00:06:00.417 EAL: Heap on socket 0 was expanded by 4MB 00:06:00.417 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.417 EAL: request: mp_malloc_sync 00:06:00.417 EAL: No shared files mode enabled, IPC is disabled 00:06:00.417 EAL: Heap on socket 0 was shrunk by 4MB 00:06:00.417 EAL: Trying to obtain current memory policy. 00:06:00.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.417 EAL: Restoring previous memory policy: 4 00:06:00.417 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.417 EAL: request: mp_malloc_sync 00:06:00.417 EAL: No shared files mode enabled, IPC is disabled 00:06:00.417 EAL: Heap on socket 0 was expanded by 6MB 00:06:00.417 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.417 EAL: request: mp_malloc_sync 00:06:00.417 EAL: No shared files mode enabled, IPC is disabled 00:06:00.417 EAL: Heap on socket 0 was shrunk by 6MB 00:06:00.417 EAL: Trying to obtain current memory policy. 00:06:00.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.417 EAL: Restoring previous memory policy: 4 00:06:00.417 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.417 EAL: request: mp_malloc_sync 00:06:00.417 EAL: No shared files mode enabled, IPC is disabled 00:06:00.417 EAL: Heap on socket 0 was expanded by 10MB 00:06:00.417 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.417 EAL: request: mp_malloc_sync 00:06:00.417 EAL: No shared files mode enabled, IPC is disabled 00:06:00.417 EAL: Heap on socket 0 was shrunk by 10MB 00:06:00.417 EAL: Trying to obtain current memory policy. 
00:06:00.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.417 EAL: Restoring previous memory policy: 4 00:06:00.417 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.417 EAL: request: mp_malloc_sync 00:06:00.417 EAL: No shared files mode enabled, IPC is disabled 00:06:00.417 EAL: Heap on socket 0 was expanded by 18MB 00:06:00.417 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.417 EAL: request: mp_malloc_sync 00:06:00.417 EAL: No shared files mode enabled, IPC is disabled 00:06:00.417 EAL: Heap on socket 0 was shrunk by 18MB 00:06:00.417 EAL: Trying to obtain current memory policy. 00:06:00.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.417 EAL: Restoring previous memory policy: 4 00:06:00.417 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.417 EAL: request: mp_malloc_sync 00:06:00.417 EAL: No shared files mode enabled, IPC is disabled 00:06:00.417 EAL: Heap on socket 0 was expanded by 34MB 00:06:00.417 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.676 EAL: request: mp_malloc_sync 00:06:00.676 EAL: No shared files mode enabled, IPC is disabled 00:06:00.676 EAL: Heap on socket 0 was shrunk by 34MB 00:06:00.676 EAL: Trying to obtain current memory policy. 00:06:00.676 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.676 EAL: Restoring previous memory policy: 4 00:06:00.676 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.676 EAL: request: mp_malloc_sync 00:06:00.676 EAL: No shared files mode enabled, IPC is disabled 00:06:00.676 EAL: Heap on socket 0 was expanded by 66MB 00:06:00.676 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.676 EAL: request: mp_malloc_sync 00:06:00.676 EAL: No shared files mode enabled, IPC is disabled 00:06:00.676 EAL: Heap on socket 0 was shrunk by 66MB 00:06:00.676 EAL: Trying to obtain current memory policy. 
00:06:00.676 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.676 EAL: Restoring previous memory policy: 4 00:06:00.676 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.676 EAL: request: mp_malloc_sync 00:06:00.676 EAL: No shared files mode enabled, IPC is disabled 00:06:00.676 EAL: Heap on socket 0 was expanded by 130MB 00:06:00.676 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.676 EAL: request: mp_malloc_sync 00:06:00.676 EAL: No shared files mode enabled, IPC is disabled 00:06:00.676 EAL: Heap on socket 0 was shrunk by 130MB 00:06:00.676 EAL: Trying to obtain current memory policy. 00:06:00.676 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.676 EAL: Restoring previous memory policy: 4 00:06:00.676 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.676 EAL: request: mp_malloc_sync 00:06:00.676 EAL: No shared files mode enabled, IPC is disabled 00:06:00.676 EAL: Heap on socket 0 was expanded by 258MB 00:06:00.676 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.676 EAL: request: mp_malloc_sync 00:06:00.676 EAL: No shared files mode enabled, IPC is disabled 00:06:00.676 EAL: Heap on socket 0 was shrunk by 258MB 00:06:00.676 EAL: Trying to obtain current memory policy. 00:06:00.676 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.934 EAL: Restoring previous memory policy: 4 00:06:00.934 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.934 EAL: request: mp_malloc_sync 00:06:00.934 EAL: No shared files mode enabled, IPC is disabled 00:06:00.934 EAL: Heap on socket 0 was expanded by 514MB 00:06:00.934 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.934 EAL: request: mp_malloc_sync 00:06:00.934 EAL: No shared files mode enabled, IPC is disabled 00:06:00.934 EAL: Heap on socket 0 was shrunk by 514MB 00:06:00.934 EAL: Trying to obtain current memory policy. 
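The expand/shrink sizes `vtophys_spdk_malloc_test` steps through (4 MB, 6 MB, 10 MB, 18 MB, 34 MB, 66 MB, 130 MB, 258 MB, 514 MB, and finally 1026 MB) follow a `2^k + 2` MB progression. That pattern is inferred from the log lines, not taken from the test source, but it reproduces the sequence exactly:

```shell
# Reproduce the "Heap on socket 0 was expanded by N MB" sizes seen in
# the vtophys log: 2^k + 2 MB for k = 1..10.
sizes=()
for k in $(seq 1 10); do
    sizes+=( $(( (1 << k) + 2 )) )
done
echo "${sizes[*]}"   # 4 6 10 18 34 66 130 258 514 1026
```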
00:06:00.934 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.193 EAL: Restoring previous memory policy: 4 00:06:01.193 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.193 EAL: request: mp_malloc_sync 00:06:01.193 EAL: No shared files mode enabled, IPC is disabled 00:06:01.193 EAL: Heap on socket 0 was expanded by 1026MB 00:06:01.450 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.707 passed 00:06:01.707 00:06:01.707 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.707 suites 1 1 n/a 0 0 00:06:01.707 tests 2 2 2 0 0 00:06:01.707 asserts 5218 5218 5218 0 n/a 00:06:01.707 00:06:01.707 Elapsed time = 1.038 seconds 00:06:01.707 EAL: request: mp_malloc_sync 00:06:01.707 EAL: No shared files mode enabled, IPC is disabled 00:06:01.707 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:01.707 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.707 EAL: request: mp_malloc_sync 00:06:01.707 EAL: No shared files mode enabled, IPC is disabled 00:06:01.707 EAL: Heap on socket 0 was shrunk by 2MB 00:06:01.707 EAL: No shared files mode enabled, IPC is disabled 00:06:01.707 EAL: No shared files mode enabled, IPC is disabled 00:06:01.707 EAL: No shared files mode enabled, IPC is disabled 00:06:01.707 00:06:01.707 real 0m1.242s 00:06:01.707 user 0m0.669s 00:06:01.707 sys 0m0.442s 00:06:01.707 22:07:33 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.707 ************************************ 00:06:01.707 END TEST env_vtophys 00:06:01.707 22:07:33 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:01.707 ************************************ 00:06:01.707 22:07:33 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:01.707 22:07:33 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.707 22:07:33 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.707 22:07:33 env -- common/autotest_common.sh@10 -- # set +x 00:06:01.707 
************************************ 00:06:01.707 START TEST env_pci 00:06:01.707 ************************************ 00:06:01.707 22:07:33 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:01.707 00:06:01.707 00:06:01.707 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.707 http://cunit.sourceforge.net/ 00:06:01.707 00:06:01.707 00:06:01.707 Suite: pci 00:06:01.707 Test: pci_hook ...[2024-07-23 22:07:33.729383] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 72147 has claimed it 00:06:01.707 passed 00:06:01.707 00:06:01.707 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.707 suites 1 1 n/a 0 0 00:06:01.707 tests 1 1 1 0 0 00:06:01.707 asserts 25 25 25 0 n/a 00:06:01.707 00:06:01.707 Elapsed time = 0.002 seconds 00:06:01.707 EAL: Cannot find device (10000:00:01.0) 00:06:01.707 EAL: Failed to attach device on primary process 00:06:01.707 00:06:01.707 real 0m0.025s 00:06:01.707 user 0m0.009s 00:06:01.707 sys 0m0.016s 00:06:01.707 22:07:33 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.707 22:07:33 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:01.707 ************************************ 00:06:01.707 END TEST env_pci 00:06:01.707 ************************************ 00:06:01.707 22:07:33 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:01.707 22:07:33 env -- env/env.sh@15 -- # uname 00:06:01.707 22:07:33 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:01.707 22:07:33 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:01.707 22:07:33 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:01.707 22:07:33 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:01.707 22:07:33 env 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.707 22:07:33 env -- common/autotest_common.sh@10 -- # set +x 00:06:01.707 ************************************ 00:06:01.707 START TEST env_dpdk_post_init 00:06:01.707 ************************************ 00:06:01.707 22:07:33 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:01.707 EAL: Detected CPU lcores: 10 00:06:01.707 EAL: Detected NUMA nodes: 1 00:06:01.707 EAL: Detected shared linkage of DPDK 00:06:01.707 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:01.707 EAL: Selected IOVA mode 'PA' 00:06:01.966 Starting DPDK initialization... 00:06:01.966 Starting SPDK post initialization... 00:06:01.966 SPDK NVMe probe 00:06:01.966 Attaching to 0000:00:10.0 00:06:01.966 Attaching to 0000:00:11.0 00:06:01.966 Attached to 0000:00:10.0 00:06:01.967 Attached to 0000:00:11.0 00:06:01.967 Cleaning up... 
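The `env.sh@14`/`@22` trace lines above show the DPDK argument string being assembled in two steps: a fixed core mask, then `--base-virtaddr=0x200000000000` appended only when `uname` is Linux (presumably so the hugepage mappings land at a predictable address). A simplified sketch of that assembly, with `build_env_argv` as an illustrative name rather than the real helper:

```shell
# Simplified version of the argv assembly traced in test/env/env.sh:
# start with the core mask, append --base-virtaddr only on Linux.
build_env_argv() {
    local argv='-c 0x1 '
    if [ "$(uname)" = Linux ]; then
        argv+='--base-virtaddr=0x200000000000'
    fi
    printf '%s' "$argv"
}
```

The resulting string is what `env_dpdk_post_init` above was launched with: `-c 0x1 --base-virtaddr=0x200000000000`.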
00:06:01.967 00:06:01.967 real 0m0.187s 00:06:01.967 user 0m0.054s 00:06:01.967 sys 0m0.032s 00:06:01.967 22:07:33 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.967 22:07:33 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:01.967 ************************************ 00:06:01.967 END TEST env_dpdk_post_init 00:06:01.967 ************************************ 00:06:01.967 22:07:34 env -- env/env.sh@26 -- # uname 00:06:01.967 22:07:34 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:01.967 22:07:34 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:01.967 22:07:34 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.967 22:07:34 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.967 22:07:34 env -- common/autotest_common.sh@10 -- # set +x 00:06:01.967 ************************************ 00:06:01.967 START TEST env_mem_callbacks 00:06:01.967 ************************************ 00:06:01.967 22:07:34 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:01.967 EAL: Detected CPU lcores: 10 00:06:01.967 EAL: Detected NUMA nodes: 1 00:06:01.967 EAL: Detected shared linkage of DPDK 00:06:01.967 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:01.967 EAL: Selected IOVA mode 'PA' 00:06:02.227 00:06:02.227 00:06:02.227 CUnit - A unit testing framework for C - Version 2.1-3 00:06:02.227 http://cunit.sourceforge.net/ 00:06:02.227 00:06:02.227 00:06:02.227 Suite: memory 00:06:02.227 Test: test ... 
00:06:02.227 register 0x200000200000 2097152 00:06:02.227 malloc 3145728 00:06:02.227 register 0x200000400000 4194304 00:06:02.227 buf 0x200000500000 len 3145728 PASSED 00:06:02.227 malloc 64 00:06:02.227 buf 0x2000004fff40 len 64 PASSED 00:06:02.227 malloc 4194304 00:06:02.227 register 0x200000800000 6291456 00:06:02.227 buf 0x200000a00000 len 4194304 PASSED 00:06:02.227 free 0x200000500000 3145728 00:06:02.227 free 0x2000004fff40 64 00:06:02.227 unregister 0x200000400000 4194304 PASSED 00:06:02.227 free 0x200000a00000 4194304 00:06:02.227 unregister 0x200000800000 6291456 PASSED 00:06:02.227 malloc 8388608 00:06:02.227 register 0x200000400000 10485760 00:06:02.227 buf 0x200000600000 len 8388608 PASSED 00:06:02.227 free 0x200000600000 8388608 00:06:02.227 unregister 0x200000400000 10485760 PASSED 00:06:02.227 passed 00:06:02.227 00:06:02.227 Run Summary: Type Total Ran Passed Failed Inactive 00:06:02.227 suites 1 1 n/a 0 0 00:06:02.227 tests 1 1 1 0 0 00:06:02.227 asserts 15 15 15 0 n/a 00:06:02.227 00:06:02.227 Elapsed time = 0.009 seconds 00:06:02.227 00:06:02.227 real 0m0.151s 00:06:02.227 user 0m0.019s 00:06:02.227 sys 0m0.031s 00:06:02.227 22:07:34 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.227 22:07:34 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:02.227 ************************************ 00:06:02.227 END TEST env_mem_callbacks 00:06:02.227 ************************************ 00:06:02.227 00:06:02.227 real 0m2.245s 00:06:02.227 user 0m1.105s 00:06:02.227 sys 0m0.811s 00:06:02.227 22:07:34 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.227 22:07:34 env -- common/autotest_common.sh@10 -- # set +x 00:06:02.227 ************************************ 00:06:02.227 END TEST env 00:06:02.227 ************************************ 00:06:02.227 22:07:34 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:02.227 22:07:34 -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.227 22:07:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.227 22:07:34 -- common/autotest_common.sh@10 -- # set +x 00:06:02.227 ************************************ 00:06:02.227 START TEST rpc 00:06:02.227 ************************************ 00:06:02.227 22:07:34 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:02.227 * Looking for test storage... 00:06:02.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:02.227 22:07:34 rpc -- rpc/rpc.sh@65 -- # spdk_pid=72262 00:06:02.227 22:07:34 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.227 22:07:34 rpc -- rpc/rpc.sh@67 -- # waitforlisten 72262 00:06:02.227 22:07:34 rpc -- common/autotest_common.sh@829 -- # '[' -z 72262 ']' 00:06:02.227 22:07:34 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.227 22:07:34 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.227 22:07:34 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.227 22:07:34 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.227 22:07:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.227 22:07:34 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:02.486 [2024-07-23 22:07:34.474930] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:06:02.486 [2024-07-23 22:07:34.475038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72262 ] 00:06:02.487 [2024-07-23 22:07:34.604685] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:02.487 [2024-07-23 22:07:34.621892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.487 [2024-07-23 22:07:34.671894] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:02.487 [2024-07-23 22:07:34.671941] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 72262' to capture a snapshot of events at runtime. 00:06:02.487 [2024-07-23 22:07:34.671966] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:02.487 [2024-07-23 22:07:34.671975] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:02.487 [2024-07-23 22:07:34.671982] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid72262 for offline analysis/debug. 
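The `waitforlisten 72262` call above blocks until `spdk_tgt` is accepting RPCs on `/var/tmp/spdk.sock`. A stripped-down sketch of the polling idea only: the real helper in `autotest_common.sh` also verifies the PID is alive and issues an RPC before returning, and `wait_for_path` here is an illustrative name:

```shell
# Poll until a socket/file path appears, with a timeout in seconds.
# Illustrative only: waitforlisten additionally checks the process is
# still running and that rpc.py can talk to it.
wait_for_path() {
    local path=$1 timeout=${2:-10} waited=0
    until [ -e "$path" ]; do
        sleep 0.1
        waited=$((waited + 1))
        if [ "$waited" -ge $((timeout * 10)) ]; then
            echo "timed out waiting for $path" >&2
            return 1
        fi
    done
}
```

The `trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT` set just before it ensures the target is torn down even if this wait fails.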
00:06:02.487 [2024-07-23 22:07:34.672010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.746 [2024-07-23 22:07:34.715456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:03.314 22:07:35 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.314 22:07:35 rpc -- common/autotest_common.sh@862 -- # return 0 00:06:03.314 22:07:35 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:03.314 22:07:35 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:03.314 22:07:35 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:03.314 22:07:35 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:03.314 22:07:35 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.314 22:07:35 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.314 22:07:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.314 ************************************ 00:06:03.314 START TEST rpc_integrity 00:06:03.314 ************************************ 00:06:03.314 22:07:35 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:03.314 22:07:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:03.314 22:07:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.314 22:07:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.314 22:07:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.314 22:07:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:03.314 22:07:35 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # jq length 00:06:03.574 22:07:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:03.574 22:07:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:03.574 22:07:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.574 22:07:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.574 22:07:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.574 22:07:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:03.574 22:07:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:03.574 22:07:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.574 22:07:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.574 22:07:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.574 22:07:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:03.574 { 00:06:03.574 "name": "Malloc0", 00:06:03.574 "aliases": [ 00:06:03.574 "3a190e21-56fc-4d99-9a00-94ba14d17cf4" 00:06:03.574 ], 00:06:03.574 "product_name": "Malloc disk", 00:06:03.574 "block_size": 512, 00:06:03.574 "num_blocks": 16384, 00:06:03.574 "uuid": "3a190e21-56fc-4d99-9a00-94ba14d17cf4", 00:06:03.574 "assigned_rate_limits": { 00:06:03.574 "rw_ios_per_sec": 0, 00:06:03.574 "rw_mbytes_per_sec": 0, 00:06:03.574 "r_mbytes_per_sec": 0, 00:06:03.574 "w_mbytes_per_sec": 0 00:06:03.574 }, 00:06:03.574 "claimed": false, 00:06:03.574 "zoned": false, 00:06:03.574 "supported_io_types": { 00:06:03.574 "read": true, 00:06:03.574 "write": true, 00:06:03.574 "unmap": true, 00:06:03.574 "flush": true, 00:06:03.574 "reset": true, 00:06:03.574 "nvme_admin": false, 00:06:03.574 "nvme_io": false, 00:06:03.574 "nvme_io_md": false, 00:06:03.574 "write_zeroes": true, 00:06:03.574 "zcopy": true, 00:06:03.574 "get_zone_info": false, 00:06:03.574 "zone_management": false, 00:06:03.574 "zone_append": false, 
00:06:03.574 "compare": false, 00:06:03.574 "compare_and_write": false, 00:06:03.574 "abort": true, 00:06:03.574 "seek_hole": false, 00:06:03.574 "seek_data": false, 00:06:03.574 "copy": true, 00:06:03.574 "nvme_iov_md": false 00:06:03.574 }, 00:06:03.574 "memory_domains": [ 00:06:03.574 { 00:06:03.574 "dma_device_id": "system", 00:06:03.574 "dma_device_type": 1 00:06:03.574 }, 00:06:03.574 { 00:06:03.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:03.574 "dma_device_type": 2 00:06:03.574 } 00:06:03.574 ], 00:06:03.574 "driver_specific": {} 00:06:03.574 } 00:06:03.574 ]' 00:06:03.574 22:07:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:03.574 22:07:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:03.574 22:07:35 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:03.574 22:07:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.574 22:07:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.574 [2024-07-23 22:07:35.610775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:03.574 [2024-07-23 22:07:35.610829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:03.574 [2024-07-23 22:07:35.610845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a28a80 00:06:03.574 [2024-07-23 22:07:35.610853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:03.574 [2024-07-23 22:07:35.612242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:03.574 [2024-07-23 22:07:35.612280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:03.574 Passthru0 00:06:03.574 22:07:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.574 22:07:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:03.574 22:07:35 rpc.rpc_integrity -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.574 22:07:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.574 22:07:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.574 22:07:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:03.574 { 00:06:03.574 "name": "Malloc0", 00:06:03.574 "aliases": [ 00:06:03.574 "3a190e21-56fc-4d99-9a00-94ba14d17cf4" 00:06:03.574 ], 00:06:03.574 "product_name": "Malloc disk", 00:06:03.574 "block_size": 512, 00:06:03.574 "num_blocks": 16384, 00:06:03.574 "uuid": "3a190e21-56fc-4d99-9a00-94ba14d17cf4", 00:06:03.574 "assigned_rate_limits": { 00:06:03.574 "rw_ios_per_sec": 0, 00:06:03.574 "rw_mbytes_per_sec": 0, 00:06:03.574 "r_mbytes_per_sec": 0, 00:06:03.574 "w_mbytes_per_sec": 0 00:06:03.574 }, 00:06:03.574 "claimed": true, 00:06:03.574 "claim_type": "exclusive_write", 00:06:03.574 "zoned": false, 00:06:03.574 "supported_io_types": { 00:06:03.574 "read": true, 00:06:03.574 "write": true, 00:06:03.574 "unmap": true, 00:06:03.574 "flush": true, 00:06:03.574 "reset": true, 00:06:03.574 "nvme_admin": false, 00:06:03.574 "nvme_io": false, 00:06:03.574 "nvme_io_md": false, 00:06:03.574 "write_zeroes": true, 00:06:03.574 "zcopy": true, 00:06:03.574 "get_zone_info": false, 00:06:03.574 "zone_management": false, 00:06:03.574 "zone_append": false, 00:06:03.574 "compare": false, 00:06:03.574 "compare_and_write": false, 00:06:03.575 "abort": true, 00:06:03.575 "seek_hole": false, 00:06:03.575 "seek_data": false, 00:06:03.575 "copy": true, 00:06:03.575 "nvme_iov_md": false 00:06:03.575 }, 00:06:03.575 "memory_domains": [ 00:06:03.575 { 00:06:03.575 "dma_device_id": "system", 00:06:03.575 "dma_device_type": 1 00:06:03.575 }, 00:06:03.575 { 00:06:03.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:03.575 "dma_device_type": 2 00:06:03.575 } 00:06:03.575 ], 00:06:03.575 "driver_specific": {} 00:06:03.575 }, 00:06:03.575 { 00:06:03.575 "name": "Passthru0", 00:06:03.575 "aliases": 
[ 00:06:03.575 "17ab0a6e-4301-53e6-9c6d-32398b1b7a80" 00:06:03.575 ], 00:06:03.575 "product_name": "passthru", 00:06:03.575 "block_size": 512, 00:06:03.575 "num_blocks": 16384, 00:06:03.575 "uuid": "17ab0a6e-4301-53e6-9c6d-32398b1b7a80", 00:06:03.575 "assigned_rate_limits": { 00:06:03.575 "rw_ios_per_sec": 0, 00:06:03.575 "rw_mbytes_per_sec": 0, 00:06:03.575 "r_mbytes_per_sec": 0, 00:06:03.575 "w_mbytes_per_sec": 0 00:06:03.575 }, 00:06:03.575 "claimed": false, 00:06:03.575 "zoned": false, 00:06:03.575 "supported_io_types": { 00:06:03.575 "read": true, 00:06:03.575 "write": true, 00:06:03.575 "unmap": true, 00:06:03.575 "flush": true, 00:06:03.575 "reset": true, 00:06:03.575 "nvme_admin": false, 00:06:03.575 "nvme_io": false, 00:06:03.575 "nvme_io_md": false, 00:06:03.575 "write_zeroes": true, 00:06:03.575 "zcopy": true, 00:06:03.575 "get_zone_info": false, 00:06:03.575 "zone_management": false, 00:06:03.575 "zone_append": false, 00:06:03.575 "compare": false, 00:06:03.575 "compare_and_write": false, 00:06:03.575 "abort": true, 00:06:03.575 "seek_hole": false, 00:06:03.575 "seek_data": false, 00:06:03.575 "copy": true, 00:06:03.575 "nvme_iov_md": false 00:06:03.575 }, 00:06:03.575 "memory_domains": [ 00:06:03.575 { 00:06:03.575 "dma_device_id": "system", 00:06:03.575 "dma_device_type": 1 00:06:03.575 }, 00:06:03.575 { 00:06:03.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:03.575 "dma_device_type": 2 00:06:03.575 } 00:06:03.575 ], 00:06:03.575 "driver_specific": { 00:06:03.575 "passthru": { 00:06:03.575 "name": "Passthru0", 00:06:03.575 "base_bdev_name": "Malloc0" 00:06:03.575 } 00:06:03.575 } 00:06:03.575 } 00:06:03.575 ]' 00:06:03.575 22:07:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:03.575 22:07:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:03.575 22:07:35 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:03.575 22:07:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 
00:06:03.575 22:07:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.575 22:07:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.575 22:07:35 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:03.575 22:07:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.575 22:07:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.575 22:07:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.575 22:07:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:03.575 22:07:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.575 22:07:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.575 22:07:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.575 22:07:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:03.575 22:07:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:03.575 ************************************ 00:06:03.575 END TEST rpc_integrity 00:06:03.575 ************************************ 00:06:03.575 22:07:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:03.575 00:06:03.575 real 0m0.308s 00:06:03.575 user 0m0.184s 00:06:03.575 sys 0m0.057s 00:06:03.575 22:07:35 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.575 22:07:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.834 22:07:35 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:03.834 22:07:35 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.834 22:07:35 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.834 22:07:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.834 ************************************ 00:06:03.834 START TEST rpc_plugins 00:06:03.834 ************************************ 00:06:03.834 22:07:35 
rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:06:03.834 22:07:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:03.834 22:07:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.835 22:07:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:03.835 22:07:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.835 22:07:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:03.835 22:07:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:03.835 22:07:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.835 22:07:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:03.835 22:07:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.835 22:07:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:03.835 { 00:06:03.835 "name": "Malloc1", 00:06:03.835 "aliases": [ 00:06:03.835 "d8847da8-5bc2-4cdb-8cb0-9e1dd9c6d98e" 00:06:03.835 ], 00:06:03.835 "product_name": "Malloc disk", 00:06:03.835 "block_size": 4096, 00:06:03.835 "num_blocks": 256, 00:06:03.835 "uuid": "d8847da8-5bc2-4cdb-8cb0-9e1dd9c6d98e", 00:06:03.835 "assigned_rate_limits": { 00:06:03.835 "rw_ios_per_sec": 0, 00:06:03.835 "rw_mbytes_per_sec": 0, 00:06:03.835 "r_mbytes_per_sec": 0, 00:06:03.835 "w_mbytes_per_sec": 0 00:06:03.835 }, 00:06:03.835 "claimed": false, 00:06:03.835 "zoned": false, 00:06:03.835 "supported_io_types": { 00:06:03.835 "read": true, 00:06:03.835 "write": true, 00:06:03.835 "unmap": true, 00:06:03.835 "flush": true, 00:06:03.835 "reset": true, 00:06:03.835 "nvme_admin": false, 00:06:03.835 "nvme_io": false, 00:06:03.835 "nvme_io_md": false, 00:06:03.835 "write_zeroes": true, 00:06:03.835 "zcopy": true, 00:06:03.835 "get_zone_info": false, 00:06:03.835 "zone_management": false, 00:06:03.835 "zone_append": false, 00:06:03.835 "compare": false, 00:06:03.835 
"compare_and_write": false, 00:06:03.835 "abort": true, 00:06:03.835 "seek_hole": false, 00:06:03.835 "seek_data": false, 00:06:03.835 "copy": true, 00:06:03.835 "nvme_iov_md": false 00:06:03.835 }, 00:06:03.835 "memory_domains": [ 00:06:03.835 { 00:06:03.835 "dma_device_id": "system", 00:06:03.835 "dma_device_type": 1 00:06:03.835 }, 00:06:03.835 { 00:06:03.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:03.835 "dma_device_type": 2 00:06:03.835 } 00:06:03.835 ], 00:06:03.835 "driver_specific": {} 00:06:03.835 } 00:06:03.835 ]' 00:06:03.835 22:07:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:03.835 22:07:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:03.835 22:07:35 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:03.835 22:07:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.835 22:07:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:03.835 22:07:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.835 22:07:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:03.835 22:07:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.835 22:07:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:03.835 22:07:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.835 22:07:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:03.835 22:07:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:03.835 ************************************ 00:06:03.835 END TEST rpc_plugins 00:06:03.835 ************************************ 00:06:03.835 22:07:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:03.835 00:06:03.835 real 0m0.154s 00:06:03.835 user 0m0.093s 00:06:03.835 sys 0m0.025s 00:06:03.835 22:07:35 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.835 22:07:35 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:06:03.835 22:07:36 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:03.835 22:07:36 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.835 22:07:36 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.835 22:07:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.094 ************************************ 00:06:04.094 START TEST rpc_trace_cmd_test 00:06:04.094 ************************************ 00:06:04.094 22:07:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:06:04.094 22:07:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:04.094 22:07:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:04.095 22:07:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.095 22:07:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.095 22:07:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.095 22:07:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:04.095 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid72262", 00:06:04.095 "tpoint_group_mask": "0x8", 00:06:04.095 "iscsi_conn": { 00:06:04.095 "mask": "0x2", 00:06:04.095 "tpoint_mask": "0x0" 00:06:04.095 }, 00:06:04.095 "scsi": { 00:06:04.095 "mask": "0x4", 00:06:04.095 "tpoint_mask": "0x0" 00:06:04.095 }, 00:06:04.095 "bdev": { 00:06:04.095 "mask": "0x8", 00:06:04.095 "tpoint_mask": "0xffffffffffffffff" 00:06:04.095 }, 00:06:04.095 "nvmf_rdma": { 00:06:04.095 "mask": "0x10", 00:06:04.095 "tpoint_mask": "0x0" 00:06:04.095 }, 00:06:04.095 "nvmf_tcp": { 00:06:04.095 "mask": "0x20", 00:06:04.095 "tpoint_mask": "0x0" 00:06:04.095 }, 00:06:04.095 "ftl": { 00:06:04.095 "mask": "0x40", 00:06:04.095 "tpoint_mask": "0x0" 00:06:04.095 }, 00:06:04.095 "blobfs": { 00:06:04.095 "mask": "0x80", 00:06:04.095 "tpoint_mask": "0x0" 00:06:04.095 }, 
00:06:04.095 "dsa": { 00:06:04.095 "mask": "0x200", 00:06:04.095 "tpoint_mask": "0x0" 00:06:04.095 }, 00:06:04.095 "thread": { 00:06:04.095 "mask": "0x400", 00:06:04.095 "tpoint_mask": "0x0" 00:06:04.095 }, 00:06:04.095 "nvme_pcie": { 00:06:04.095 "mask": "0x800", 00:06:04.095 "tpoint_mask": "0x0" 00:06:04.095 }, 00:06:04.095 "iaa": { 00:06:04.095 "mask": "0x1000", 00:06:04.095 "tpoint_mask": "0x0" 00:06:04.095 }, 00:06:04.095 "nvme_tcp": { 00:06:04.095 "mask": "0x2000", 00:06:04.095 "tpoint_mask": "0x0" 00:06:04.095 }, 00:06:04.095 "bdev_nvme": { 00:06:04.095 "mask": "0x4000", 00:06:04.095 "tpoint_mask": "0x0" 00:06:04.095 }, 00:06:04.095 "sock": { 00:06:04.095 "mask": "0x8000", 00:06:04.095 "tpoint_mask": "0x0" 00:06:04.095 } 00:06:04.095 }' 00:06:04.095 22:07:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:04.095 22:07:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:04.095 22:07:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:04.095 22:07:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:04.095 22:07:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:04.095 22:07:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:04.095 22:07:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:04.095 22:07:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:04.095 22:07:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:04.355 ************************************ 00:06:04.355 END TEST rpc_trace_cmd_test 00:06:04.355 ************************************ 00:06:04.355 22:07:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:04.355 00:06:04.355 real 0m0.251s 00:06:04.355 user 0m0.198s 00:06:04.355 sys 0m0.042s 00:06:04.355 22:07:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.355 
22:07:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:04.355 22:07:36 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:04.355 22:07:36 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:04.355 22:07:36 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:04.355 22:07:36 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.355 22:07:36 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.355 22:07:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.355 ************************************ 00:06:04.355 START TEST rpc_daemon_integrity 00:06:04.355 ************************************ 00:06:04.355 22:07:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:04.355 22:07:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:04.355 22:07:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.355 22:07:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.355 22:07:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.355 22:07:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:04.355 22:07:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:04.355 22:07:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:04.355 22:07:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:04.355 22:07:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.355 22:07:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.355 22:07:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.355 22:07:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:04.355 22:07:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:04.355 22:07:36 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.355 22:07:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.355 22:07:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.355 22:07:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:04.355 { 00:06:04.355 "name": "Malloc2", 00:06:04.355 "aliases": [ 00:06:04.355 "31490099-bc93-468e-91f3-146545844fd8" 00:06:04.355 ], 00:06:04.355 "product_name": "Malloc disk", 00:06:04.355 "block_size": 512, 00:06:04.355 "num_blocks": 16384, 00:06:04.355 "uuid": "31490099-bc93-468e-91f3-146545844fd8", 00:06:04.355 "assigned_rate_limits": { 00:06:04.355 "rw_ios_per_sec": 0, 00:06:04.355 "rw_mbytes_per_sec": 0, 00:06:04.355 "r_mbytes_per_sec": 0, 00:06:04.355 "w_mbytes_per_sec": 0 00:06:04.355 }, 00:06:04.355 "claimed": false, 00:06:04.355 "zoned": false, 00:06:04.355 "supported_io_types": { 00:06:04.355 "read": true, 00:06:04.355 "write": true, 00:06:04.355 "unmap": true, 00:06:04.355 "flush": true, 00:06:04.355 "reset": true, 00:06:04.355 "nvme_admin": false, 00:06:04.355 "nvme_io": false, 00:06:04.355 "nvme_io_md": false, 00:06:04.355 "write_zeroes": true, 00:06:04.355 "zcopy": true, 00:06:04.355 "get_zone_info": false, 00:06:04.355 "zone_management": false, 00:06:04.355 "zone_append": false, 00:06:04.355 "compare": false, 00:06:04.355 "compare_and_write": false, 00:06:04.355 "abort": true, 00:06:04.355 "seek_hole": false, 00:06:04.355 "seek_data": false, 00:06:04.355 "copy": true, 00:06:04.355 "nvme_iov_md": false 00:06:04.355 }, 00:06:04.355 "memory_domains": [ 00:06:04.355 { 00:06:04.355 "dma_device_id": "system", 00:06:04.355 "dma_device_type": 1 00:06:04.355 }, 00:06:04.355 { 00:06:04.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.355 "dma_device_type": 2 00:06:04.355 } 00:06:04.355 ], 00:06:04.355 "driver_specific": {} 00:06:04.355 } 00:06:04.355 ]' 00:06:04.355 22:07:36 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@17 -- # jq length 00:06:04.355 22:07:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:04.355 22:07:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:04.355 22:07:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.355 22:07:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.355 [2024-07-23 22:07:36.501650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:04.355 [2024-07-23 22:07:36.501773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:04.355 [2024-07-23 22:07:36.501815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a14590 00:06:04.355 [2024-07-23 22:07:36.501838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:04.355 [2024-07-23 22:07:36.504576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:04.355 [2024-07-23 22:07:36.504647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:04.355 Passthru0 00:06:04.355 22:07:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.355 22:07:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:04.355 22:07:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.355 22:07:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.355 22:07:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.355 22:07:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:04.355 { 00:06:04.355 "name": "Malloc2", 00:06:04.355 "aliases": [ 00:06:04.355 "31490099-bc93-468e-91f3-146545844fd8" 00:06:04.355 ], 00:06:04.355 "product_name": "Malloc disk", 00:06:04.355 "block_size": 512, 00:06:04.355 "num_blocks": 16384, 00:06:04.355 
"uuid": "31490099-bc93-468e-91f3-146545844fd8", 00:06:04.355 "assigned_rate_limits": { 00:06:04.355 "rw_ios_per_sec": 0, 00:06:04.355 "rw_mbytes_per_sec": 0, 00:06:04.355 "r_mbytes_per_sec": 0, 00:06:04.355 "w_mbytes_per_sec": 0 00:06:04.355 }, 00:06:04.355 "claimed": true, 00:06:04.355 "claim_type": "exclusive_write", 00:06:04.355 "zoned": false, 00:06:04.355 "supported_io_types": { 00:06:04.355 "read": true, 00:06:04.355 "write": true, 00:06:04.355 "unmap": true, 00:06:04.355 "flush": true, 00:06:04.355 "reset": true, 00:06:04.355 "nvme_admin": false, 00:06:04.355 "nvme_io": false, 00:06:04.355 "nvme_io_md": false, 00:06:04.355 "write_zeroes": true, 00:06:04.355 "zcopy": true, 00:06:04.355 "get_zone_info": false, 00:06:04.355 "zone_management": false, 00:06:04.355 "zone_append": false, 00:06:04.355 "compare": false, 00:06:04.355 "compare_and_write": false, 00:06:04.355 "abort": true, 00:06:04.355 "seek_hole": false, 00:06:04.355 "seek_data": false, 00:06:04.355 "copy": true, 00:06:04.355 "nvme_iov_md": false 00:06:04.355 }, 00:06:04.355 "memory_domains": [ 00:06:04.355 { 00:06:04.355 "dma_device_id": "system", 00:06:04.355 "dma_device_type": 1 00:06:04.355 }, 00:06:04.355 { 00:06:04.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.356 "dma_device_type": 2 00:06:04.356 } 00:06:04.356 ], 00:06:04.356 "driver_specific": {} 00:06:04.356 }, 00:06:04.356 { 00:06:04.356 "name": "Passthru0", 00:06:04.356 "aliases": [ 00:06:04.356 "7aa032eb-da1a-5280-a006-d540a9d310ee" 00:06:04.356 ], 00:06:04.356 "product_name": "passthru", 00:06:04.356 "block_size": 512, 00:06:04.356 "num_blocks": 16384, 00:06:04.356 "uuid": "7aa032eb-da1a-5280-a006-d540a9d310ee", 00:06:04.356 "assigned_rate_limits": { 00:06:04.356 "rw_ios_per_sec": 0, 00:06:04.356 "rw_mbytes_per_sec": 0, 00:06:04.356 "r_mbytes_per_sec": 0, 00:06:04.356 "w_mbytes_per_sec": 0 00:06:04.356 }, 00:06:04.356 "claimed": false, 00:06:04.356 "zoned": false, 00:06:04.356 "supported_io_types": { 00:06:04.356 "read": true, 
00:06:04.356 "write": true, 00:06:04.356 "unmap": true, 00:06:04.356 "flush": true, 00:06:04.356 "reset": true, 00:06:04.356 "nvme_admin": false, 00:06:04.356 "nvme_io": false, 00:06:04.356 "nvme_io_md": false, 00:06:04.356 "write_zeroes": true, 00:06:04.356 "zcopy": true, 00:06:04.356 "get_zone_info": false, 00:06:04.356 "zone_management": false, 00:06:04.356 "zone_append": false, 00:06:04.356 "compare": false, 00:06:04.356 "compare_and_write": false, 00:06:04.356 "abort": true, 00:06:04.356 "seek_hole": false, 00:06:04.356 "seek_data": false, 00:06:04.356 "copy": true, 00:06:04.356 "nvme_iov_md": false 00:06:04.356 }, 00:06:04.356 "memory_domains": [ 00:06:04.356 { 00:06:04.356 "dma_device_id": "system", 00:06:04.356 "dma_device_type": 1 00:06:04.356 }, 00:06:04.356 { 00:06:04.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.356 "dma_device_type": 2 00:06:04.356 } 00:06:04.356 ], 00:06:04.356 "driver_specific": { 00:06:04.356 "passthru": { 00:06:04.356 "name": "Passthru0", 00:06:04.356 "base_bdev_name": "Malloc2" 00:06:04.356 } 00:06:04.356 } 00:06:04.356 } 00:06:04.356 ]' 00:06:04.356 22:07:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:04.615 22:07:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:04.615 22:07:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:04.615 22:07:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.615 22:07:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.615 22:07:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.615 22:07:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:04.615 22:07:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.615 22:07:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.615 22:07:36 rpc.rpc_daemon_integrity 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.615 22:07:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:04.615 22:07:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.615 22:07:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.615 22:07:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.615 22:07:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:04.615 22:07:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:04.615 ************************************ 00:06:04.615 END TEST rpc_daemon_integrity 00:06:04.615 ************************************ 00:06:04.615 22:07:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:04.615 00:06:04.615 real 0m0.315s 00:06:04.615 user 0m0.199s 00:06:04.615 sys 0m0.049s 00:06:04.615 22:07:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.615 22:07:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.615 22:07:36 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:04.615 22:07:36 rpc -- rpc/rpc.sh@84 -- # killprocess 72262 00:06:04.615 22:07:36 rpc -- common/autotest_common.sh@948 -- # '[' -z 72262 ']' 00:06:04.615 22:07:36 rpc -- common/autotest_common.sh@952 -- # kill -0 72262 00:06:04.615 22:07:36 rpc -- common/autotest_common.sh@953 -- # uname 00:06:04.615 22:07:36 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:04.615 22:07:36 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72262 00:06:04.615 killing process with pid 72262 00:06:04.615 22:07:36 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:04.615 22:07:36 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:04.615 22:07:36 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72262' 00:06:04.615 22:07:36 
rpc -- common/autotest_common.sh@967 -- # kill 72262 00:06:04.615 22:07:36 rpc -- common/autotest_common.sh@972 -- # wait 72262 00:06:05.184 00:06:05.184 real 0m3.015s 00:06:05.184 user 0m3.758s 00:06:05.184 sys 0m0.767s 00:06:05.184 22:07:37 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.184 22:07:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.184 ************************************ 00:06:05.184 END TEST rpc 00:06:05.184 ************************************ 00:06:05.184 22:07:37 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:05.184 22:07:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.184 22:07:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.184 22:07:37 -- common/autotest_common.sh@10 -- # set +x 00:06:05.455 ************************************ 00:06:05.455 START TEST skip_rpc 00:06:05.455 ************************************ 00:06:05.455 22:07:37 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:05.455 * Looking for test storage... 
00:06:05.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:05.455 22:07:37 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:05.455 22:07:37 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:05.455 22:07:37 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:05.455 22:07:37 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.455 22:07:37 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.455 22:07:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.455 ************************************ 00:06:05.455 START TEST skip_rpc 00:06:05.455 ************************************ 00:06:05.455 22:07:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:06:05.455 22:07:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=72460 00:06:05.455 22:07:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.455 22:07:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:05.455 22:07:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:05.455 [2024-07-23 22:07:37.551313] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:06:05.455 [2024-07-23 22:07:37.551419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72460 ] 00:06:05.749 [2024-07-23 22:07:37.679173] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:05.749 [2024-07-23 22:07:37.694799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.749 [2024-07-23 22:07:37.769000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.749 [2024-07-23 22:07:37.845495] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:11.020 22:07:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:11.020 22:07:42 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:11.020 22:07:42 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:11.020 22:07:42 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:11.020 22:07:42 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.020 22:07:42 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:11.020 22:07:42 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.020 22:07:42 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:11.020 22:07:42 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.020 22:07:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.020 22:07:42 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:11.020 22:07:42 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:11.020 22:07:42 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:11.020 22:07:42 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:11.020 22:07:42 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:11.020 22:07:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:11.020 22:07:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 72460 00:06:11.020 22:07:42 skip_rpc.skip_rpc -- 
common/autotest_common.sh@948 -- # '[' -z 72460 ']' 00:06:11.020 22:07:42 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 72460 00:06:11.020 22:07:42 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:06:11.020 22:07:42 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:11.020 22:07:42 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72460 00:06:11.020 killing process with pid 72460 00:06:11.020 22:07:42 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:11.020 22:07:42 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:11.020 22:07:42 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72460' 00:06:11.020 22:07:42 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 72460 00:06:11.020 22:07:42 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 72460 00:06:11.020 00:06:11.020 real 0m5.600s 00:06:11.020 user 0m5.108s 00:06:11.020 sys 0m0.404s 00:06:11.020 22:07:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.020 ************************************ 00:06:11.020 END TEST skip_rpc 00:06:11.020 ************************************ 00:06:11.020 22:07:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.020 22:07:43 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:11.020 22:07:43 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.020 22:07:43 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.020 22:07:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.020 ************************************ 00:06:11.020 START TEST skip_rpc_with_json 00:06:11.020 ************************************ 00:06:11.020 22:07:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 
00:06:11.020 22:07:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:11.020 22:07:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=72541 00:06:11.020 22:07:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.020 22:07:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.020 22:07:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 72541 00:06:11.020 22:07:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 72541 ']' 00:06:11.021 22:07:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.021 22:07:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.021 22:07:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.021 22:07:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.021 22:07:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:11.283 [2024-07-23 22:07:43.216491] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:06:11.283 [2024-07-23 22:07:43.216598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72541 ] 00:06:11.283 [2024-07-23 22:07:43.343753] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:11.283 [2024-07-23 22:07:43.360658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.283 [2024-07-23 22:07:43.442198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.541 [2024-07-23 22:07:43.520244] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.108 22:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.108 22:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:06:12.108 22:07:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:12.108 22:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.108 22:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:12.108 [2024-07-23 22:07:44.187842] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:12.108 request: 00:06:12.108 { 00:06:12.108 "trtype": "tcp", 00:06:12.108 "method": "nvmf_get_transports", 00:06:12.108 "req_id": 1 00:06:12.108 } 00:06:12.108 Got JSON-RPC error response 00:06:12.108 response: 00:06:12.108 { 00:06:12.108 "code": -19, 00:06:12.108 "message": "No such device" 00:06:12.108 } 00:06:12.108 22:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:12.108 22:07:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:12.108 22:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.108 22:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:12.108 [2024-07-23 22:07:44.199951] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:12.108 22:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.108 22:07:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd 
save_config 00:06:12.108 22:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.108 22:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:12.367 22:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.367 22:07:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:12.367 { 00:06:12.367 "subsystems": [ 00:06:12.367 { 00:06:12.367 "subsystem": "keyring", 00:06:12.367 "config": [] 00:06:12.367 }, 00:06:12.367 { 00:06:12.367 "subsystem": "iobuf", 00:06:12.367 "config": [ 00:06:12.367 { 00:06:12.367 "method": "iobuf_set_options", 00:06:12.367 "params": { 00:06:12.367 "small_pool_count": 8192, 00:06:12.367 "large_pool_count": 1024, 00:06:12.367 "small_bufsize": 8192, 00:06:12.367 "large_bufsize": 135168 00:06:12.367 } 00:06:12.367 } 00:06:12.367 ] 00:06:12.367 }, 00:06:12.367 { 00:06:12.367 "subsystem": "sock", 00:06:12.367 "config": [ 00:06:12.367 { 00:06:12.367 "method": "sock_set_default_impl", 00:06:12.367 "params": { 00:06:12.367 "impl_name": "uring" 00:06:12.367 } 00:06:12.367 }, 00:06:12.367 { 00:06:12.367 "method": "sock_impl_set_options", 00:06:12.367 "params": { 00:06:12.367 "impl_name": "ssl", 00:06:12.367 "recv_buf_size": 4096, 00:06:12.367 "send_buf_size": 4096, 00:06:12.367 "enable_recv_pipe": true, 00:06:12.367 "enable_quickack": false, 00:06:12.367 "enable_placement_id": 0, 00:06:12.367 "enable_zerocopy_send_server": true, 00:06:12.367 "enable_zerocopy_send_client": false, 00:06:12.367 "zerocopy_threshold": 0, 00:06:12.367 "tls_version": 0, 00:06:12.367 "enable_ktls": false 00:06:12.367 } 00:06:12.367 }, 00:06:12.367 { 00:06:12.367 "method": "sock_impl_set_options", 00:06:12.367 "params": { 00:06:12.367 "impl_name": "posix", 00:06:12.367 "recv_buf_size": 2097152, 00:06:12.367 "send_buf_size": 2097152, 00:06:12.367 "enable_recv_pipe": true, 00:06:12.367 "enable_quickack": 
false, 00:06:12.367 "enable_placement_id": 0, 00:06:12.367 "enable_zerocopy_send_server": true, 00:06:12.367 "enable_zerocopy_send_client": false, 00:06:12.367 "zerocopy_threshold": 0, 00:06:12.367 "tls_version": 0, 00:06:12.367 "enable_ktls": false 00:06:12.367 } 00:06:12.367 }, 00:06:12.367 { 00:06:12.367 "method": "sock_impl_set_options", 00:06:12.367 "params": { 00:06:12.367 "impl_name": "uring", 00:06:12.367 "recv_buf_size": 2097152, 00:06:12.367 "send_buf_size": 2097152, 00:06:12.367 "enable_recv_pipe": true, 00:06:12.367 "enable_quickack": false, 00:06:12.367 "enable_placement_id": 0, 00:06:12.367 "enable_zerocopy_send_server": false, 00:06:12.367 "enable_zerocopy_send_client": false, 00:06:12.367 "zerocopy_threshold": 0, 00:06:12.367 "tls_version": 0, 00:06:12.367 "enable_ktls": false 00:06:12.367 } 00:06:12.367 } 00:06:12.367 ] 00:06:12.367 }, 00:06:12.367 { 00:06:12.367 "subsystem": "vmd", 00:06:12.367 "config": [] 00:06:12.367 }, 00:06:12.367 { 00:06:12.367 "subsystem": "accel", 00:06:12.367 "config": [ 00:06:12.367 { 00:06:12.367 "method": "accel_set_options", 00:06:12.367 "params": { 00:06:12.367 "small_cache_size": 128, 00:06:12.367 "large_cache_size": 16, 00:06:12.367 "task_count": 2048, 00:06:12.367 "sequence_count": 2048, 00:06:12.367 "buf_count": 2048 00:06:12.367 } 00:06:12.367 } 00:06:12.367 ] 00:06:12.367 }, 00:06:12.367 { 00:06:12.367 "subsystem": "bdev", 00:06:12.367 "config": [ 00:06:12.367 { 00:06:12.367 "method": "bdev_set_options", 00:06:12.367 "params": { 00:06:12.367 "bdev_io_pool_size": 65535, 00:06:12.367 "bdev_io_cache_size": 256, 00:06:12.367 "bdev_auto_examine": true, 00:06:12.367 "iobuf_small_cache_size": 128, 00:06:12.367 "iobuf_large_cache_size": 16 00:06:12.367 } 00:06:12.367 }, 00:06:12.367 { 00:06:12.367 "method": "bdev_raid_set_options", 00:06:12.367 "params": { 00:06:12.367 "process_window_size_kb": 1024, 00:06:12.367 "process_max_bandwidth_mb_sec": 0 00:06:12.367 } 00:06:12.367 }, 00:06:12.367 { 00:06:12.367 "method": 
"bdev_iscsi_set_options", 00:06:12.367 "params": { 00:06:12.367 "timeout_sec": 30 00:06:12.367 } 00:06:12.367 }, 00:06:12.367 { 00:06:12.367 "method": "bdev_nvme_set_options", 00:06:12.367 "params": { 00:06:12.367 "action_on_timeout": "none", 00:06:12.367 "timeout_us": 0, 00:06:12.367 "timeout_admin_us": 0, 00:06:12.367 "keep_alive_timeout_ms": 10000, 00:06:12.367 "arbitration_burst": 0, 00:06:12.367 "low_priority_weight": 0, 00:06:12.367 "medium_priority_weight": 0, 00:06:12.367 "high_priority_weight": 0, 00:06:12.367 "nvme_adminq_poll_period_us": 10000, 00:06:12.367 "nvme_ioq_poll_period_us": 0, 00:06:12.367 "io_queue_requests": 0, 00:06:12.367 "delay_cmd_submit": true, 00:06:12.367 "transport_retry_count": 4, 00:06:12.367 "bdev_retry_count": 3, 00:06:12.367 "transport_ack_timeout": 0, 00:06:12.367 "ctrlr_loss_timeout_sec": 0, 00:06:12.367 "reconnect_delay_sec": 0, 00:06:12.367 "fast_io_fail_timeout_sec": 0, 00:06:12.367 "disable_auto_failback": false, 00:06:12.367 "generate_uuids": false, 00:06:12.367 "transport_tos": 0, 00:06:12.367 "nvme_error_stat": false, 00:06:12.367 "rdma_srq_size": 0, 00:06:12.367 "io_path_stat": false, 00:06:12.367 "allow_accel_sequence": false, 00:06:12.367 "rdma_max_cq_size": 0, 00:06:12.367 "rdma_cm_event_timeout_ms": 0, 00:06:12.367 "dhchap_digests": [ 00:06:12.367 "sha256", 00:06:12.367 "sha384", 00:06:12.367 "sha512" 00:06:12.367 ], 00:06:12.367 "dhchap_dhgroups": [ 00:06:12.367 "null", 00:06:12.367 "ffdhe2048", 00:06:12.367 "ffdhe3072", 00:06:12.367 "ffdhe4096", 00:06:12.367 "ffdhe6144", 00:06:12.367 "ffdhe8192" 00:06:12.367 ] 00:06:12.367 } 00:06:12.367 }, 00:06:12.367 { 00:06:12.367 "method": "bdev_nvme_set_hotplug", 00:06:12.367 "params": { 00:06:12.367 "period_us": 100000, 00:06:12.367 "enable": false 00:06:12.367 } 00:06:12.367 }, 00:06:12.367 { 00:06:12.367 "method": "bdev_wait_for_examine" 00:06:12.367 } 00:06:12.367 ] 00:06:12.367 }, 00:06:12.367 { 00:06:12.367 "subsystem": "scsi", 00:06:12.367 "config": null 00:06:12.367 
}, 00:06:12.367 { 00:06:12.367 "subsystem": "scheduler", 00:06:12.367 "config": [ 00:06:12.367 { 00:06:12.367 "method": "framework_set_scheduler", 00:06:12.367 "params": { 00:06:12.367 "name": "static" 00:06:12.367 } 00:06:12.367 } 00:06:12.367 ] 00:06:12.367 }, 00:06:12.367 { 00:06:12.367 "subsystem": "vhost_scsi", 00:06:12.367 "config": [] 00:06:12.367 }, 00:06:12.367 { 00:06:12.367 "subsystem": "vhost_blk", 00:06:12.367 "config": [] 00:06:12.367 }, 00:06:12.367 { 00:06:12.367 "subsystem": "ublk", 00:06:12.367 "config": [] 00:06:12.367 }, 00:06:12.367 { 00:06:12.367 "subsystem": "nbd", 00:06:12.367 "config": [] 00:06:12.367 }, 00:06:12.367 { 00:06:12.367 "subsystem": "nvmf", 00:06:12.367 "config": [ 00:06:12.367 { 00:06:12.367 "method": "nvmf_set_config", 00:06:12.367 "params": { 00:06:12.367 "discovery_filter": "match_any", 00:06:12.367 "admin_cmd_passthru": { 00:06:12.367 "identify_ctrlr": false 00:06:12.367 } 00:06:12.367 } 00:06:12.367 }, 00:06:12.367 { 00:06:12.367 "method": "nvmf_set_max_subsystems", 00:06:12.367 "params": { 00:06:12.367 "max_subsystems": 1024 00:06:12.367 } 00:06:12.367 }, 00:06:12.367 { 00:06:12.367 "method": "nvmf_set_crdt", 00:06:12.367 "params": { 00:06:12.367 "crdt1": 0, 00:06:12.367 "crdt2": 0, 00:06:12.367 "crdt3": 0 00:06:12.367 } 00:06:12.367 }, 00:06:12.367 { 00:06:12.367 "method": "nvmf_create_transport", 00:06:12.367 "params": { 00:06:12.367 "trtype": "TCP", 00:06:12.367 "max_queue_depth": 128, 00:06:12.367 "max_io_qpairs_per_ctrlr": 127, 00:06:12.367 "in_capsule_data_size": 4096, 00:06:12.367 "max_io_size": 131072, 00:06:12.367 "io_unit_size": 131072, 00:06:12.367 "max_aq_depth": 128, 00:06:12.367 "num_shared_buffers": 511, 00:06:12.367 "buf_cache_size": 4294967295, 00:06:12.367 "dif_insert_or_strip": false, 00:06:12.367 "zcopy": false, 00:06:12.367 "c2h_success": true, 00:06:12.367 "sock_priority": 0, 00:06:12.367 "abort_timeout_sec": 1, 00:06:12.367 "ack_timeout": 0, 00:06:12.367 "data_wr_pool_size": 0 00:06:12.367 } 
00:06:12.367 } 00:06:12.367 ] 00:06:12.367 }, 00:06:12.367 { 00:06:12.367 "subsystem": "iscsi", 00:06:12.367 "config": [ 00:06:12.367 { 00:06:12.367 "method": "iscsi_set_options", 00:06:12.367 "params": { 00:06:12.367 "node_base": "iqn.2016-06.io.spdk", 00:06:12.367 "max_sessions": 128, 00:06:12.367 "max_connections_per_session": 2, 00:06:12.367 "max_queue_depth": 64, 00:06:12.367 "default_time2wait": 2, 00:06:12.367 "default_time2retain": 20, 00:06:12.367 "first_burst_length": 8192, 00:06:12.368 "immediate_data": true, 00:06:12.368 "allow_duplicated_isid": false, 00:06:12.368 "error_recovery_level": 0, 00:06:12.368 "nop_timeout": 60, 00:06:12.368 "nop_in_interval": 30, 00:06:12.368 "disable_chap": false, 00:06:12.368 "require_chap": false, 00:06:12.368 "mutual_chap": false, 00:06:12.368 "chap_group": 0, 00:06:12.368 "max_large_datain_per_connection": 64, 00:06:12.368 "max_r2t_per_connection": 4, 00:06:12.368 "pdu_pool_size": 36864, 00:06:12.368 "immediate_data_pool_size": 16384, 00:06:12.368 "data_out_pool_size": 2048 00:06:12.368 } 00:06:12.368 } 00:06:12.368 ] 00:06:12.368 } 00:06:12.368 ] 00:06:12.368 } 00:06:12.368 22:07:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:12.368 22:07:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 72541 00:06:12.368 22:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 72541 ']' 00:06:12.368 22:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 72541 00:06:12.368 22:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:12.368 22:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.368 22:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72541 00:06:12.368 killing process with pid 72541 00:06:12.368 22:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:12.368 22:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.368 22:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72541' 00:06:12.368 22:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 72541 00:06:12.368 22:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 72541 00:06:12.935 22:07:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=72574 00:06:12.935 22:07:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:12.935 22:07:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:18.206 22:07:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 72574 00:06:18.206 22:07:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 72574 ']' 00:06:18.206 22:07:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 72574 00:06:18.206 22:07:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:18.206 22:07:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:18.206 22:07:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72574 00:06:18.206 killing process with pid 72574 00:06:18.206 22:07:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:18.206 22:07:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:18.206 22:07:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72574' 00:06:18.206 22:07:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 72574 00:06:18.206 22:07:50 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 72574 00:06:18.465 22:07:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:18.465 22:07:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:18.465 00:06:18.465 real 0m7.425s 00:06:18.465 user 0m6.912s 00:06:18.465 sys 0m0.916s 00:06:18.465 22:07:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.465 ************************************ 00:06:18.465 END TEST skip_rpc_with_json 00:06:18.465 22:07:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:18.465 ************************************ 00:06:18.465 22:07:50 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:18.465 22:07:50 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.465 22:07:50 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.465 22:07:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.465 ************************************ 00:06:18.465 START TEST skip_rpc_with_delay 00:06:18.465 ************************************ 00:06:18.465 22:07:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:18.465 22:07:50 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:18.465 22:07:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:18.465 22:07:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:18.465 22:07:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:18.465 22:07:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.465 22:07:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:18.465 22:07:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.465 22:07:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:18.465 22:07:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.465 22:07:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:18.465 22:07:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:18.465 22:07:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:18.724 [2024-07-23 22:07:50.715387] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:18.724 [2024-07-23 22:07:50.715542] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:18.724 22:07:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:18.724 22:07:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:18.724 22:07:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:18.724 22:07:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:18.724 00:06:18.724 real 0m0.095s 00:06:18.724 user 0m0.052s 00:06:18.724 sys 0m0.042s 00:06:18.724 22:07:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.724 ************************************ 00:06:18.724 END TEST skip_rpc_with_delay 00:06:18.724 22:07:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:18.724 ************************************ 00:06:18.724 22:07:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:18.724 22:07:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:18.724 22:07:50 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:18.724 22:07:50 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.724 22:07:50 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.724 22:07:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.724 ************************************ 00:06:18.724 START TEST exit_on_failed_rpc_init 00:06:18.724 ************************************ 00:06:18.724 22:07:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:18.724 22:07:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=72689 00:06:18.724 22:07:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 72689 00:06:18.724 22:07:50 
skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.724 22:07:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 72689 ']' 00:06:18.724 22:07:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.724 22:07:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.724 22:07:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.724 22:07:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.724 22:07:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:18.724 [2024-07-23 22:07:50.868435] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:06:18.724 [2024-07-23 22:07:50.868541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72689 ] 00:06:18.992 [2024-07-23 22:07:50.996467] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:18.992 [2024-07-23 22:07:51.014657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.992 [2024-07-23 22:07:51.093177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.992 [2024-07-23 22:07:51.171029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.925 22:07:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.925 22:07:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:19.925 22:07:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.925 22:07:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.925 22:07:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:19.925 22:07:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.925 22:07:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:19.925 22:07:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.925 22:07:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:19.925 22:07:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.925 22:07:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:19.925 22:07:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.925 22:07:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:19.925 22:07:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:19.925 22:07:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.925 [2024-07-23 22:07:51.886918] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:06:19.925 [2024-07-23 22:07:51.887025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72707 ] 00:06:19.925 [2024-07-23 22:07:52.014214] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:19.925 [2024-07-23 22:07:52.036952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.925 [2024-07-23 22:07:52.092905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.925 [2024-07-23 22:07:52.093015] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:19.925 [2024-07-23 22:07:52.093032] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:19.925 [2024-07-23 22:07:52.093045] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:20.184 22:07:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:20.184 22:07:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:20.184 22:07:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:20.184 22:07:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:20.184 22:07:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:20.184 22:07:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:20.184 22:07:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:20.184 22:07:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 72689 00:06:20.184 22:07:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 72689 ']' 00:06:20.184 22:07:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 72689 00:06:20.184 22:07:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:20.184 22:07:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.184 22:07:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72689 00:06:20.184 22:07:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:20.184 22:07:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:20.184 22:07:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72689' 
00:06:20.184 killing process with pid 72689 00:06:20.184 22:07:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 72689 00:06:20.184 22:07:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 72689 00:06:20.752 00:06:20.752 real 0m1.956s 00:06:20.752 user 0m2.062s 00:06:20.752 sys 0m0.571s 00:06:20.752 ************************************ 00:06:20.752 END TEST exit_on_failed_rpc_init 00:06:20.752 ************************************ 00:06:20.752 22:07:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.752 22:07:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:20.752 22:07:52 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:20.752 ************************************ 00:06:20.752 END TEST skip_rpc 00:06:20.752 ************************************ 00:06:20.752 00:06:20.752 real 0m15.421s 00:06:20.752 user 0m14.252s 00:06:20.752 sys 0m2.152s 00:06:20.752 22:07:52 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.752 22:07:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.752 22:07:52 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:20.752 22:07:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.752 22:07:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.752 22:07:52 -- common/autotest_common.sh@10 -- # set +x 00:06:20.752 ************************************ 00:06:20.752 START TEST rpc_client 00:06:20.752 ************************************ 00:06:20.752 22:07:52 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:21.012 * Looking for test storage... 
00:06:21.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:21.012 22:07:52 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:21.012 OK 00:06:21.012 22:07:52 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:21.012 00:06:21.012 real 0m0.120s 00:06:21.012 user 0m0.055s 00:06:21.012 sys 0m0.073s 00:06:21.012 22:07:52 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.012 ************************************ 00:06:21.012 END TEST rpc_client 00:06:21.012 ************************************ 00:06:21.012 22:07:52 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:21.012 22:07:53 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:21.012 22:07:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.012 22:07:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.012 22:07:53 -- common/autotest_common.sh@10 -- # set +x 00:06:21.012 ************************************ 00:06:21.012 START TEST json_config 00:06:21.012 ************************************ 00:06:21.012 22:07:53 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:21.012 22:07:53 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:21.012 22:07:53 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:21.012 22:07:53 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:21.012 22:07:53 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:21.012 22:07:53 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:21.012 22:07:53 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:21.012 22:07:53 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:21.012 22:07:53 json_config -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:06:21.012 22:07:53 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:21.012 22:07:53 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:21.012 22:07:53 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:21.012 22:07:53 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:21.012 22:07:53 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:870cc518-1c26-4e82-9298-fb61f38a7fd8 00:06:21.012 22:07:53 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=870cc518-1c26-4e82-9298-fb61f38a7fd8 00:06:21.012 22:07:53 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:21.012 22:07:53 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:21.012 22:07:53 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:21.012 22:07:53 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:21.012 22:07:53 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:21.012 22:07:53 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.012 22:07:53 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.012 22:07:53 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.012 22:07:53 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.013 22:07:53 json_config -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.013 22:07:53 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.013 22:07:53 json_config -- paths/export.sh@5 -- # export PATH 00:06:21.013 22:07:53 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.013 22:07:53 json_config -- nvmf/common.sh@47 -- # : 0 00:06:21.013 22:07:53 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:21.013 22:07:53 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:21.013 22:07:53 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:21.013 22:07:53 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:21.013 22:07:53 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:21.013 22:07:53 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:21.013 22:07:53 
json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:21.013 22:07:53 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:21.013 22:07:53 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:21.013 22:07:53 json_config -- json_config/json_config.sh@11 -- # [[ 1 -eq 1 ]] 00:06:21.013 22:07:53 json_config -- json_config/json_config.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:06:21.013 22:07:53 json_config -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:06:21.013 22:07:53 json_config -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:06:21.013 22:07:53 json_config -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:06:21.013 22:07:53 json_config -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:06:21.013 22:07:53 json_config -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:06:21.013 22:07:53 json_config -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:06:21.013 22:07:53 json_config -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:06:21.013 22:07:53 json_config -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:06:21.013 22:07:53 json_config -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:06:21.013 22:07:53 json_config -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:06:21.013 22:07:53 json_config -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:06:21.013 22:07:53 json_config -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:06:21.013 22:07:53 json_config -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:06:21.013 22:07:53 json_config -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:06:21.013 22:07:53 json_config -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:06:21.013 22:07:53 json_config -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:06:21.013 22:07:53 json_config -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 
00:06:21.013 22:07:53 json_config -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:06:21.013 22:07:53 json_config -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:06:21.013 22:07:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:21.013 22:07:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:21.013 22:07:53 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:21.013 22:07:53 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:21.013 22:07:53 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:21.013 22:07:53 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:21.013 22:07:53 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:21.013 22:07:53 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:21.013 22:07:53 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:21.013 22:07:53 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:21.013 22:07:53 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:21.013 22:07:53 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:21.013 22:07:53 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:21.013 22:07:53 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:06:21.013 INFO: JSON configuration test init 00:06:21.013 22:07:53 
json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:06:21.013 22:07:53 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:06:21.013 22:07:53 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:21.013 22:07:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.013 22:07:53 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:06:21.013 22:07:53 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:21.013 22:07:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.013 22:07:53 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:06:21.013 22:07:53 json_config -- json_config/common.sh@9 -- # local app=target 00:06:21.013 22:07:53 json_config -- json_config/common.sh@10 -- # shift 00:06:21.013 22:07:53 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:21.013 22:07:53 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:21.013 22:07:53 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:21.013 22:07:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:21.013 Waiting for target to run... 00:06:21.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:21.013 22:07:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:21.013 22:07:53 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=72825 00:06:21.013 22:07:53 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:06:21.013 22:07:53 json_config -- json_config/common.sh@25 -- # waitforlisten 72825 /var/tmp/spdk_tgt.sock 00:06:21.013 22:07:53 json_config -- common/autotest_common.sh@829 -- # '[' -z 72825 ']' 00:06:21.013 22:07:53 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:21.013 22:07:53 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.013 22:07:53 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:21.013 22:07:53 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:21.013 22:07:53 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.013 22:07:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.283 [2024-07-23 22:07:53.221799] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:06:21.283 [2024-07-23 22:07:53.222171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72825 ] 00:06:21.552 [2024-07-23 22:07:53.586007] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:21.552 [2024-07-23 22:07:53.602924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.552 [2024-07-23 22:07:53.644371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.118 00:06:22.118 22:07:54 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.118 22:07:54 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:22.118 22:07:54 json_config -- json_config/common.sh@26 -- # echo '' 00:06:22.118 22:07:54 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:06:22.118 22:07:54 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:06:22.118 22:07:54 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:22.118 22:07:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.118 22:07:54 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:06:22.118 22:07:54 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:06:22.118 22:07:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:22.118 22:07:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.118 22:07:54 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:22.118 22:07:54 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:06:22.118 22:07:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:22.377 [2024-07-23 22:07:54.532013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:22.637 22:07:54 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:06:22.637 22:07:54 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:22.637 22:07:54 json_config -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:06:22.637 22:07:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.637 22:07:54 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:22.637 22:07:54 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:22.637 22:07:54 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:22.637 22:07:54 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:22.637 22:07:54 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:22.637 22:07:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:22.896 22:07:54 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:22.896 22:07:54 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:22.896 22:07:54 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:06:22.896 22:07:54 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:06:22.896 22:07:54 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:06:22.896 22:07:54 json_config -- json_config/json_config.sh@51 -- # sort 00:06:22.896 22:07:54 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:06:22.896 22:07:54 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:06:22.896 22:07:54 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:06:22.896 22:07:54 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:06:22.896 22:07:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:22.896 22:07:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.896 22:07:54 json_config -- json_config/json_config.sh@59 -- # return 0 00:06:22.896 22:07:54 json_config -- json_config/json_config.sh@282 -- # [[ 
0 -eq 1 ]] 00:06:22.896 22:07:54 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:22.896 22:07:54 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:22.896 22:07:54 json_config -- json_config/json_config.sh@291 -- # create_iscsi_subsystem_config 00:06:22.896 22:07:54 json_config -- json_config/json_config.sh@225 -- # timing_enter create_iscsi_subsystem_config 00:06:22.896 22:07:54 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:22.896 22:07:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.896 22:07:54 json_config -- json_config/json_config.sh@226 -- # tgt_rpc bdev_malloc_create 64 1024 --name MallocForIscsi0 00:06:22.896 22:07:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 64 1024 --name MallocForIscsi0 00:06:23.155 MallocForIscsi0 00:06:23.155 22:07:55 json_config -- json_config/json_config.sh@227 -- # tgt_rpc iscsi_create_portal_group 1 127.0.0.1:3260 00:06:23.155 22:07:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock iscsi_create_portal_group 1 127.0.0.1:3260 00:06:23.414 22:07:55 json_config -- json_config/json_config.sh@228 -- # tgt_rpc iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:06:23.414 22:07:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:06:23.414 22:07:55 json_config -- json_config/json_config.sh@229 -- # tgt_rpc iscsi_create_target_node Target3 Target3_alias MallocForIscsi0:0 1:2 64 -d 00:06:23.414 22:07:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock iscsi_create_target_node Target3 Target3_alias MallocForIscsi0:0 1:2 64 -d 00:06:23.673 22:07:55 json_config -- json_config/json_config.sh@230 -- # timing_exit 
create_iscsi_subsystem_config 00:06:23.673 22:07:55 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:23.673 22:07:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.673 22:07:55 json_config -- json_config/json_config.sh@294 -- # [[ 0 -eq 1 ]] 00:06:23.673 22:07:55 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:06:23.673 22:07:55 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:23.673 22:07:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.673 22:07:55 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:06:23.673 22:07:55 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:23.673 22:07:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:23.932 MallocBdevForConfigChangeCheck 00:06:24.190 22:07:56 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:06:24.190 22:07:56 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:24.190 22:07:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.190 22:07:56 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:06:24.190 22:07:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:24.448 INFO: shutting down applications... 00:06:24.448 22:07:56 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 
00:06:24.448 22:07:56 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:06:24.448 22:07:56 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:06:24.448 22:07:56 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:06:24.448 22:07:56 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:24.706 Calling clear_iscsi_subsystem 00:06:24.706 Calling clear_nvmf_subsystem 00:06:24.706 Calling clear_nbd_subsystem 00:06:24.706 Calling clear_ublk_subsystem 00:06:24.706 Calling clear_vhost_blk_subsystem 00:06:24.706 Calling clear_vhost_scsi_subsystem 00:06:24.706 Calling clear_bdev_subsystem 00:06:24.706 22:07:56 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:24.706 22:07:56 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:24.706 22:07:56 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:24.706 22:07:56 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:24.706 22:07:56 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:24.706 22:07:56 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:25.273 22:07:57 json_config -- json_config/json_config.sh@349 -- # break 00:06:25.273 22:07:57 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:25.273 22:07:57 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:25.273 22:07:57 json_config -- json_config/common.sh@31 -- # local app=target 00:06:25.273 22:07:57 json_config -- json_config/common.sh@34 -- # [[ -n 22 
]] 00:06:25.273 22:07:57 json_config -- json_config/common.sh@35 -- # [[ -n 72825 ]] 00:06:25.273 22:07:57 json_config -- json_config/common.sh@38 -- # kill -SIGINT 72825 00:06:25.273 22:07:57 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:25.273 22:07:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:25.273 22:07:57 json_config -- json_config/common.sh@41 -- # kill -0 72825 00:06:25.273 22:07:57 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:25.532 22:07:57 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:25.532 22:07:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:25.532 22:07:57 json_config -- json_config/common.sh@41 -- # kill -0 72825 00:06:25.532 SPDK target shutdown done 00:06:25.532 INFO: relaunching applications... 00:06:25.532 22:07:57 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:25.532 22:07:57 json_config -- json_config/common.sh@43 -- # break 00:06:25.532 22:07:57 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:25.532 22:07:57 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:25.532 22:07:57 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:06:25.532 22:07:57 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:25.532 22:07:57 json_config -- json_config/common.sh@9 -- # local app=target 00:06:25.532 22:07:57 json_config -- json_config/common.sh@10 -- # shift 00:06:25.532 22:07:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:25.532 22:07:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:25.532 Waiting for target to run... 
00:06:25.532 22:07:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:25.532 22:07:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.532 22:07:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.532 22:07:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=73001 00:06:25.532 22:07:57 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:25.532 22:07:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:25.532 22:07:57 json_config -- json_config/common.sh@25 -- # waitforlisten 73001 /var/tmp/spdk_tgt.sock 00:06:25.532 22:07:57 json_config -- common/autotest_common.sh@829 -- # '[' -z 73001 ']' 00:06:25.532 22:07:57 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:25.532 22:07:57 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.532 22:07:57 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:25.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:25.532 22:07:57 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.532 22:07:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.532 [2024-07-23 22:07:57.723911] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:06:25.532 [2024-07-23 22:07:57.724240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73001 ] 00:06:26.099 [2024-07-23 22:07:58.060075] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:26.099 [2024-07-23 22:07:58.078150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.099 [2024-07-23 22:07:58.123346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.099 [2024-07-23 22:07:58.249011] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:26.666 00:06:26.666 INFO: Checking if target configuration is the same... 00:06:26.666 22:07:58 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.666 22:07:58 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:26.666 22:07:58 json_config -- json_config/common.sh@26 -- # echo '' 00:06:26.666 22:07:58 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:26.666 22:07:58 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 
00:06:26.666 22:07:58 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:26.666 22:07:58 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:26.666 22:07:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:26.666 + '[' 2 -ne 2 ']' 00:06:26.666 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:26.666 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:26.666 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:26.666 +++ basename /dev/fd/62 00:06:26.666 ++ mktemp /tmp/62.XXX 00:06:26.666 + tmp_file_1=/tmp/62.bzV 00:06:26.666 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:26.666 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:26.666 + tmp_file_2=/tmp/spdk_tgt_config.json.q9X 00:06:26.666 + ret=0 00:06:26.666 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:26.924 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:26.924 + diff -u /tmp/62.bzV /tmp/spdk_tgt_config.json.q9X 00:06:26.924 INFO: JSON config files are the same 00:06:26.924 + echo 'INFO: JSON config files are the same' 00:06:26.924 + rm /tmp/62.bzV /tmp/spdk_tgt_config.json.q9X 00:06:26.924 + exit 0 00:06:26.924 INFO: changing configuration and checking if this can be detected... 00:06:26.924 22:07:58 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:26.924 22:07:58 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:06:26.924 22:07:58 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:26.924 22:07:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:27.183 22:07:59 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:27.183 22:07:59 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:27.183 22:07:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:27.183 + '[' 2 -ne 2 ']' 00:06:27.183 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:27.183 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:27.183 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:27.183 +++ basename /dev/fd/62 00:06:27.183 ++ mktemp /tmp/62.XXX 00:06:27.183 + tmp_file_1=/tmp/62.RA8 00:06:27.183 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:27.183 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:27.183 + tmp_file_2=/tmp/spdk_tgt_config.json.tli 00:06:27.183 + ret=0 00:06:27.183 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:27.441 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:27.441 + diff -u /tmp/62.RA8 /tmp/spdk_tgt_config.json.tli 00:06:27.441 + ret=1 00:06:27.441 + echo '=== Start of file: /tmp/62.RA8 ===' 00:06:27.441 + cat /tmp/62.RA8 00:06:27.441 + echo '=== End of file: /tmp/62.RA8 ===' 00:06:27.441 + echo '' 00:06:27.441 + echo '=== Start of file: /tmp/spdk_tgt_config.json.tli ===' 00:06:27.441 + cat /tmp/spdk_tgt_config.json.tli 00:06:27.441 + echo '=== End of file: /tmp/spdk_tgt_config.json.tli ===' 00:06:27.441 + echo '' 00:06:27.441 + rm /tmp/62.RA8 
/tmp/spdk_tgt_config.json.tli 00:06:27.441 + exit 1 00:06:27.441 INFO: configuration change detected. 00:06:27.441 22:07:59 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:27.441 22:07:59 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:27.441 22:07:59 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:27.441 22:07:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:27.441 22:07:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.441 22:07:59 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:27.441 22:07:59 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:27.441 22:07:59 json_config -- json_config/json_config.sh@321 -- # [[ -n 73001 ]] 00:06:27.441 22:07:59 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:27.441 22:07:59 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:27.441 22:07:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:27.441 22:07:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.441 22:07:59 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:06:27.441 22:07:59 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:27.441 22:07:59 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:06:27.441 22:07:59 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:06:27.441 22:07:59 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:06:27.441 22:07:59 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:27.441 22:07:59 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:27.441 22:07:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.699 22:07:59 json_config -- 
json_config/json_config.sh@327 -- # killprocess 73001 00:06:27.699 22:07:59 json_config -- common/autotest_common.sh@948 -- # '[' -z 73001 ']' 00:06:27.699 22:07:59 json_config -- common/autotest_common.sh@952 -- # kill -0 73001 00:06:27.699 22:07:59 json_config -- common/autotest_common.sh@953 -- # uname 00:06:27.699 22:07:59 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.699 22:07:59 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73001 00:06:27.699 killing process with pid 73001 00:06:27.699 22:07:59 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:27.699 22:07:59 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:27.699 22:07:59 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73001' 00:06:27.699 22:07:59 json_config -- common/autotest_common.sh@967 -- # kill 73001 00:06:27.699 22:07:59 json_config -- common/autotest_common.sh@972 -- # wait 73001 00:06:27.957 22:08:00 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:27.957 22:08:00 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:27.957 22:08:00 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:27.957 22:08:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.957 INFO: Success 00:06:27.957 22:08:00 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:27.957 22:08:00 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:27.957 00:06:27.957 real 0m7.084s 00:06:27.957 user 0m9.420s 00:06:27.957 sys 0m1.706s 00:06:27.957 ************************************ 00:06:27.957 END TEST json_config 00:06:27.957 ************************************ 00:06:27.957 22:08:00 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:06:27.957 22:08:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:28.216 22:08:00 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:28.216 22:08:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.216 22:08:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.216 22:08:00 -- common/autotest_common.sh@10 -- # set +x 00:06:28.216 ************************************ 00:06:28.216 START TEST json_config_extra_key 00:06:28.216 ************************************ 00:06:28.216 22:08:00 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:28.216 22:08:00 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:28.216 22:08:00 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:28.216 22:08:00 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:28.216 22:08:00 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:28.216 22:08:00 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:28.216 22:08:00 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:28.216 22:08:00 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:28.216 22:08:00 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:28.216 22:08:00 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:28.216 22:08:00 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:28.216 22:08:00 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:28.216 22:08:00 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:28.216 22:08:00 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:870cc518-1c26-4e82-9298-fb61f38a7fd8 00:06:28.216 22:08:00 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=870cc518-1c26-4e82-9298-fb61f38a7fd8 00:06:28.216 22:08:00 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:28.216 22:08:00 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:28.216 22:08:00 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:28.216 22:08:00 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:28.216 22:08:00 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:28.216 22:08:00 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.216 22:08:00 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.216 22:08:00 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.216 22:08:00 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.216 22:08:00 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.216 22:08:00 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.216 22:08:00 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:28.216 22:08:00 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.216 22:08:00 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:28.216 22:08:00 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:28.216 22:08:00 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:28.216 22:08:00 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:28.216 22:08:00 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:28.216 22:08:00 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:06:28.216 22:08:00 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:28.216 22:08:00 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:28.216 22:08:00 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:28.216 22:08:00 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:28.216 22:08:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:28.216 22:08:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:28.216 22:08:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:28.216 22:08:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:28.216 22:08:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:28.217 22:08:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:28.217 22:08:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:28.217 22:08:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:28.217 INFO: launching applications... 00:06:28.217 22:08:00 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:28.217 22:08:00 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:28.217 22:08:00 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:28.217 Waiting for target to run... 
00:06:28.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:28.217 22:08:00 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:28.217 22:08:00 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:28.217 22:08:00 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:28.217 22:08:00 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:28.217 22:08:00 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:28.217 22:08:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:28.217 22:08:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:28.217 22:08:00 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=73147 00:06:28.217 22:08:00 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:28.217 22:08:00 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 73147 /var/tmp/spdk_tgt.sock 00:06:28.217 22:08:00 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 73147 ']' 00:06:28.217 22:08:00 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:28.217 22:08:00 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.217 22:08:00 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:06:28.217 22:08:00 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:28.217 22:08:00 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.217 22:08:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:28.217 [2024-07-23 22:08:00.349949] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:06:28.217 [2024-07-23 22:08:00.350336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73147 ] 00:06:28.781 [2024-07-23 22:08:00.713524] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:28.781 [2024-07-23 22:08:00.734916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.781 [2024-07-23 22:08:00.783558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.781 [2024-07-23 22:08:00.806532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:29.347 22:08:01 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.347 00:06:29.347 INFO: shutting down applications... 00:06:29.347 22:08:01 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:29.347 22:08:01 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:29.347 22:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:29.347 22:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:29.347 22:08:01 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:29.347 22:08:01 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:29.347 22:08:01 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 73147 ]] 00:06:29.347 22:08:01 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 73147 00:06:29.347 22:08:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:29.347 22:08:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:29.347 22:08:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 73147 00:06:29.347 22:08:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:29.912 22:08:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:29.912 22:08:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:29.912 22:08:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 73147 00:06:29.912 22:08:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:30.480 22:08:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:30.480 22:08:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:30.480 22:08:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 73147 00:06:30.480 22:08:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:30.480 22:08:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:30.480 SPDK target shutdown done 00:06:30.480 22:08:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:30.480 22:08:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:30.480 Success 00:06:30.480 22:08:02 json_config_extra_key -- 
json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:30.480 ************************************ 00:06:30.480 END TEST json_config_extra_key 00:06:30.480 ************************************ 00:06:30.480 00:06:30.480 real 0m2.190s 00:06:30.480 user 0m1.798s 00:06:30.480 sys 0m0.434s 00:06:30.480 22:08:02 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.480 22:08:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:30.480 22:08:02 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:30.480 22:08:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:30.480 22:08:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.480 22:08:02 -- common/autotest_common.sh@10 -- # set +x 00:06:30.480 ************************************ 00:06:30.480 START TEST alias_rpc 00:06:30.480 ************************************ 00:06:30.480 22:08:02 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:30.480 * Looking for test storage... 
00:06:30.480 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:30.480 22:08:02 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:30.480 22:08:02 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=73218 00:06:30.480 22:08:02 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 73218 00:06:30.480 22:08:02 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:30.480 22:08:02 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 73218 ']' 00:06:30.480 22:08:02 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.480 22:08:02 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.480 22:08:02 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.480 22:08:02 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.480 22:08:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.480 [2024-07-23 22:08:02.603710] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:06:30.480 [2024-07-23 22:08:02.603811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73218 ] 00:06:30.738 [2024-07-23 22:08:02.730998] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:30.738 [2024-07-23 22:08:02.751056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.738 [2024-07-23 22:08:02.842825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.738 [2024-07-23 22:08:02.928505] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:31.673 22:08:03 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.673 22:08:03 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:31.673 22:08:03 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:31.931 22:08:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 73218 00:06:31.931 22:08:03 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 73218 ']' 00:06:31.931 22:08:03 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 73218 00:06:31.931 22:08:03 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:31.931 22:08:03 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:31.931 22:08:03 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73218 00:06:31.931 killing process with pid 73218 00:06:31.931 22:08:03 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:31.931 22:08:03 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:31.931 22:08:03 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73218' 00:06:31.931 22:08:03 alias_rpc -- common/autotest_common.sh@967 -- # kill 73218 00:06:31.931 22:08:03 alias_rpc -- common/autotest_common.sh@972 -- # wait 73218 00:06:32.534 ************************************ 00:06:32.534 END TEST alias_rpc 00:06:32.534 ************************************ 00:06:32.534 00:06:32.534 real 0m2.026s 00:06:32.534 user 0m2.101s 00:06:32.534 sys 0m0.625s 00:06:32.534 22:08:04 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.534 22:08:04 alias_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:06:32.534 22:08:04 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:32.534 22:08:04 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:32.534 22:08:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.534 22:08:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.534 22:08:04 -- common/autotest_common.sh@10 -- # set +x 00:06:32.534 ************************************ 00:06:32.534 START TEST spdkcli_tcp 00:06:32.534 ************************************ 00:06:32.534 22:08:04 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:32.534 * Looking for test storage... 00:06:32.534 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:32.534 22:08:04 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:32.534 22:08:04 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:32.534 22:08:04 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:32.534 22:08:04 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:32.534 22:08:04 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:32.534 22:08:04 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:32.534 22:08:04 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:32.534 22:08:04 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:32.534 22:08:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:32.534 22:08:04 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=73294 00:06:32.534 22:08:04 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:32.534 22:08:04 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # 
waitforlisten 73294 00:06:32.534 22:08:04 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 73294 ']' 00:06:32.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.534 22:08:04 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.534 22:08:04 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.534 22:08:04 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.534 22:08:04 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.534 22:08:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:32.534 [2024-07-23 22:08:04.715510] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:06:32.534 [2024-07-23 22:08:04.715641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73294 ] 00:06:32.792 [2024-07-23 22:08:04.844155] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:32.792 [2024-07-23 22:08:04.859417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.792 [2024-07-23 22:08:04.983038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.792 [2024-07-23 22:08:04.983069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.050 [2024-07-23 22:08:05.084163] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:33.617 22:08:05 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.617 22:08:05 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:33.617 22:08:05 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:33.617 22:08:05 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=73311 00:06:33.617 22:08:05 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:33.877 [ 00:06:33.877 "bdev_malloc_delete", 00:06:33.877 "bdev_malloc_create", 00:06:33.877 "bdev_null_resize", 00:06:33.877 "bdev_null_delete", 00:06:33.877 "bdev_null_create", 00:06:33.877 "bdev_nvme_cuse_unregister", 00:06:33.877 "bdev_nvme_cuse_register", 00:06:33.877 "bdev_opal_new_user", 00:06:33.877 "bdev_opal_set_lock_state", 00:06:33.877 "bdev_opal_delete", 00:06:33.877 "bdev_opal_get_info", 00:06:33.877 "bdev_opal_create", 00:06:33.877 "bdev_nvme_opal_revert", 00:06:33.877 "bdev_nvme_opal_init", 00:06:33.877 "bdev_nvme_send_cmd", 00:06:33.877 "bdev_nvme_get_path_iostat", 00:06:33.877 "bdev_nvme_get_mdns_discovery_info", 00:06:33.877 "bdev_nvme_stop_mdns_discovery", 00:06:33.877 "bdev_nvme_start_mdns_discovery", 00:06:33.877 "bdev_nvme_set_multipath_policy", 00:06:33.877 "bdev_nvme_set_preferred_path", 00:06:33.877 "bdev_nvme_get_io_paths", 00:06:33.877 "bdev_nvme_remove_error_injection", 00:06:33.877 "bdev_nvme_add_error_injection", 00:06:33.877 "bdev_nvme_get_discovery_info", 00:06:33.877 
"bdev_nvme_stop_discovery", 00:06:33.877 "bdev_nvme_start_discovery", 00:06:33.877 "bdev_nvme_get_controller_health_info", 00:06:33.877 "bdev_nvme_disable_controller", 00:06:33.877 "bdev_nvme_enable_controller", 00:06:33.877 "bdev_nvme_reset_controller", 00:06:33.877 "bdev_nvme_get_transport_statistics", 00:06:33.877 "bdev_nvme_apply_firmware", 00:06:33.877 "bdev_nvme_detach_controller", 00:06:33.877 "bdev_nvme_get_controllers", 00:06:33.877 "bdev_nvme_attach_controller", 00:06:33.877 "bdev_nvme_set_hotplug", 00:06:33.877 "bdev_nvme_set_options", 00:06:33.877 "bdev_passthru_delete", 00:06:33.877 "bdev_passthru_create", 00:06:33.877 "bdev_lvol_set_parent_bdev", 00:06:33.877 "bdev_lvol_set_parent", 00:06:33.877 "bdev_lvol_check_shallow_copy", 00:06:33.877 "bdev_lvol_start_shallow_copy", 00:06:33.877 "bdev_lvol_grow_lvstore", 00:06:33.877 "bdev_lvol_get_lvols", 00:06:33.877 "bdev_lvol_get_lvstores", 00:06:33.877 "bdev_lvol_delete", 00:06:33.877 "bdev_lvol_set_read_only", 00:06:33.877 "bdev_lvol_resize", 00:06:33.877 "bdev_lvol_decouple_parent", 00:06:33.877 "bdev_lvol_inflate", 00:06:33.877 "bdev_lvol_rename", 00:06:33.877 "bdev_lvol_clone_bdev", 00:06:33.877 "bdev_lvol_clone", 00:06:33.877 "bdev_lvol_snapshot", 00:06:33.877 "bdev_lvol_create", 00:06:33.877 "bdev_lvol_delete_lvstore", 00:06:33.877 "bdev_lvol_rename_lvstore", 00:06:33.877 "bdev_lvol_create_lvstore", 00:06:33.877 "bdev_raid_set_options", 00:06:33.877 "bdev_raid_remove_base_bdev", 00:06:33.877 "bdev_raid_add_base_bdev", 00:06:33.877 "bdev_raid_delete", 00:06:33.877 "bdev_raid_create", 00:06:33.877 "bdev_raid_get_bdevs", 00:06:33.877 "bdev_error_inject_error", 00:06:33.877 "bdev_error_delete", 00:06:33.877 "bdev_error_create", 00:06:33.877 "bdev_split_delete", 00:06:33.877 "bdev_split_create", 00:06:33.877 "bdev_delay_delete", 00:06:33.877 "bdev_delay_create", 00:06:33.877 "bdev_delay_update_latency", 00:06:33.877 "bdev_zone_block_delete", 00:06:33.877 "bdev_zone_block_create", 00:06:33.877 
"blobfs_create", 00:06:33.877 "blobfs_detect", 00:06:33.877 "blobfs_set_cache_size", 00:06:33.877 "bdev_aio_delete", 00:06:33.877 "bdev_aio_rescan", 00:06:33.877 "bdev_aio_create", 00:06:33.877 "bdev_ftl_set_property", 00:06:33.877 "bdev_ftl_get_properties", 00:06:33.877 "bdev_ftl_get_stats", 00:06:33.877 "bdev_ftl_unmap", 00:06:33.877 "bdev_ftl_unload", 00:06:33.877 "bdev_ftl_delete", 00:06:33.877 "bdev_ftl_load", 00:06:33.877 "bdev_ftl_create", 00:06:33.877 "bdev_virtio_attach_controller", 00:06:33.877 "bdev_virtio_scsi_get_devices", 00:06:33.877 "bdev_virtio_detach_controller", 00:06:33.877 "bdev_virtio_blk_set_hotplug", 00:06:33.877 "bdev_iscsi_delete", 00:06:33.877 "bdev_iscsi_create", 00:06:33.877 "bdev_iscsi_set_options", 00:06:33.877 "bdev_uring_delete", 00:06:33.877 "bdev_uring_rescan", 00:06:33.877 "bdev_uring_create", 00:06:33.877 "accel_error_inject_error", 00:06:33.877 "ioat_scan_accel_module", 00:06:33.877 "dsa_scan_accel_module", 00:06:33.877 "iaa_scan_accel_module", 00:06:33.877 "keyring_file_remove_key", 00:06:33.877 "keyring_file_add_key", 00:06:33.877 "keyring_linux_set_options", 00:06:33.877 "iscsi_get_histogram", 00:06:33.877 "iscsi_enable_histogram", 00:06:33.877 "iscsi_set_options", 00:06:33.877 "iscsi_get_auth_groups", 00:06:33.877 "iscsi_auth_group_remove_secret", 00:06:33.877 "iscsi_auth_group_add_secret", 00:06:33.877 "iscsi_delete_auth_group", 00:06:33.877 "iscsi_create_auth_group", 00:06:33.877 "iscsi_set_discovery_auth", 00:06:33.877 "iscsi_get_options", 00:06:33.877 "iscsi_target_node_request_logout", 00:06:33.877 "iscsi_target_node_set_redirect", 00:06:33.877 "iscsi_target_node_set_auth", 00:06:33.877 "iscsi_target_node_add_lun", 00:06:33.877 "iscsi_get_stats", 00:06:33.877 "iscsi_get_connections", 00:06:33.877 "iscsi_portal_group_set_auth", 00:06:33.877 "iscsi_start_portal_group", 00:06:33.877 "iscsi_delete_portal_group", 00:06:33.877 "iscsi_create_portal_group", 00:06:33.877 "iscsi_get_portal_groups", 00:06:33.877 
"iscsi_delete_target_node", 00:06:33.877 "iscsi_target_node_remove_pg_ig_maps", 00:06:33.877 "iscsi_target_node_add_pg_ig_maps", 00:06:33.877 "iscsi_create_target_node", 00:06:33.877 "iscsi_get_target_nodes", 00:06:33.877 "iscsi_delete_initiator_group", 00:06:33.877 "iscsi_initiator_group_remove_initiators", 00:06:33.877 "iscsi_initiator_group_add_initiators", 00:06:33.877 "iscsi_create_initiator_group", 00:06:33.877 "iscsi_get_initiator_groups", 00:06:33.877 "nvmf_set_crdt", 00:06:33.877 "nvmf_set_config", 00:06:33.877 "nvmf_set_max_subsystems", 00:06:33.877 "nvmf_stop_mdns_prr", 00:06:33.877 "nvmf_publish_mdns_prr", 00:06:33.877 "nvmf_subsystem_get_listeners", 00:06:33.877 "nvmf_subsystem_get_qpairs", 00:06:33.877 "nvmf_subsystem_get_controllers", 00:06:33.877 "nvmf_get_stats", 00:06:33.877 "nvmf_get_transports", 00:06:33.877 "nvmf_create_transport", 00:06:33.877 "nvmf_get_targets", 00:06:33.877 "nvmf_delete_target", 00:06:33.877 "nvmf_create_target", 00:06:33.877 "nvmf_subsystem_allow_any_host", 00:06:33.877 "nvmf_subsystem_remove_host", 00:06:33.877 "nvmf_subsystem_add_host", 00:06:33.877 "nvmf_ns_remove_host", 00:06:33.877 "nvmf_ns_add_host", 00:06:33.877 "nvmf_subsystem_remove_ns", 00:06:33.877 "nvmf_subsystem_add_ns", 00:06:33.877 "nvmf_subsystem_listener_set_ana_state", 00:06:33.877 "nvmf_discovery_get_referrals", 00:06:33.877 "nvmf_discovery_remove_referral", 00:06:33.877 "nvmf_discovery_add_referral", 00:06:33.877 "nvmf_subsystem_remove_listener", 00:06:33.877 "nvmf_subsystem_add_listener", 00:06:33.877 "nvmf_delete_subsystem", 00:06:33.877 "nvmf_create_subsystem", 00:06:33.878 "nvmf_get_subsystems", 00:06:33.878 "env_dpdk_get_mem_stats", 00:06:33.878 "nbd_get_disks", 00:06:33.878 "nbd_stop_disk", 00:06:33.878 "nbd_start_disk", 00:06:33.878 "ublk_recover_disk", 00:06:33.878 "ublk_get_disks", 00:06:33.878 "ublk_stop_disk", 00:06:33.878 "ublk_start_disk", 00:06:33.878 "ublk_destroy_target", 00:06:33.878 "ublk_create_target", 00:06:33.878 
"virtio_blk_create_transport", 00:06:33.878 "virtio_blk_get_transports", 00:06:33.878 "vhost_controller_set_coalescing", 00:06:33.878 "vhost_get_controllers", 00:06:33.878 "vhost_delete_controller", 00:06:33.878 "vhost_create_blk_controller", 00:06:33.878 "vhost_scsi_controller_remove_target", 00:06:33.878 "vhost_scsi_controller_add_target", 00:06:33.878 "vhost_start_scsi_controller", 00:06:33.878 "vhost_create_scsi_controller", 00:06:33.878 "thread_set_cpumask", 00:06:33.878 "framework_get_governor", 00:06:33.878 "framework_get_scheduler", 00:06:33.878 "framework_set_scheduler", 00:06:33.878 "framework_get_reactors", 00:06:33.878 "thread_get_io_channels", 00:06:33.878 "thread_get_pollers", 00:06:33.878 "thread_get_stats", 00:06:33.878 "framework_monitor_context_switch", 00:06:33.878 "spdk_kill_instance", 00:06:33.878 "log_enable_timestamps", 00:06:33.878 "log_get_flags", 00:06:33.878 "log_clear_flag", 00:06:33.878 "log_set_flag", 00:06:33.878 "log_get_level", 00:06:33.878 "log_set_level", 00:06:33.878 "log_get_print_level", 00:06:33.878 "log_set_print_level", 00:06:33.878 "framework_enable_cpumask_locks", 00:06:33.878 "framework_disable_cpumask_locks", 00:06:33.878 "framework_wait_init", 00:06:33.878 "framework_start_init", 00:06:33.878 "scsi_get_devices", 00:06:33.878 "bdev_get_histogram", 00:06:33.878 "bdev_enable_histogram", 00:06:33.878 "bdev_set_qos_limit", 00:06:33.878 "bdev_set_qd_sampling_period", 00:06:33.878 "bdev_get_bdevs", 00:06:33.878 "bdev_reset_iostat", 00:06:33.878 "bdev_get_iostat", 00:06:33.878 "bdev_examine", 00:06:33.878 "bdev_wait_for_examine", 00:06:33.878 "bdev_set_options", 00:06:33.878 "notify_get_notifications", 00:06:33.878 "notify_get_types", 00:06:33.878 "accel_get_stats", 00:06:33.878 "accel_set_options", 00:06:33.878 "accel_set_driver", 00:06:33.878 "accel_crypto_key_destroy", 00:06:33.878 "accel_crypto_keys_get", 00:06:33.878 "accel_crypto_key_create", 00:06:33.878 "accel_assign_opc", 00:06:33.878 "accel_get_module_info", 
00:06:33.878 "accel_get_opc_assignments", 00:06:33.878 "vmd_rescan", 00:06:33.878 "vmd_remove_device", 00:06:33.878 "vmd_enable", 00:06:33.878 "sock_get_default_impl", 00:06:33.878 "sock_set_default_impl", 00:06:33.878 "sock_impl_set_options", 00:06:33.878 "sock_impl_get_options", 00:06:33.878 "iobuf_get_stats", 00:06:33.878 "iobuf_set_options", 00:06:33.878 "framework_get_pci_devices", 00:06:33.878 "framework_get_config", 00:06:33.878 "framework_get_subsystems", 00:06:33.878 "trace_get_info", 00:06:33.878 "trace_get_tpoint_group_mask", 00:06:33.878 "trace_disable_tpoint_group", 00:06:33.878 "trace_enable_tpoint_group", 00:06:33.878 "trace_clear_tpoint_mask", 00:06:33.878 "trace_set_tpoint_mask", 00:06:33.878 "keyring_get_keys", 00:06:33.878 "spdk_get_version", 00:06:33.878 "rpc_get_methods" 00:06:33.878 ] 00:06:33.878 22:08:05 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:33.878 22:08:05 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:33.878 22:08:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:33.878 22:08:05 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:33.878 22:08:05 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 73294 00:06:33.878 22:08:05 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 73294 ']' 00:06:33.878 22:08:05 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 73294 00:06:33.878 22:08:05 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:33.878 22:08:05 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:33.878 22:08:05 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73294 00:06:33.878 22:08:05 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:33.878 killing process with pid 73294 00:06:33.878 22:08:05 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:33.878 22:08:05 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 
'killing process with pid 73294' 00:06:33.878 22:08:05 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 73294 00:06:33.878 22:08:05 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 73294 00:06:34.445 ************************************ 00:06:34.445 END TEST spdkcli_tcp 00:06:34.445 ************************************ 00:06:34.445 00:06:34.445 real 0m2.030s 00:06:34.445 user 0m3.505s 00:06:34.445 sys 0m0.676s 00:06:34.445 22:08:06 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.446 22:08:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:34.446 22:08:06 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:34.446 22:08:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.446 22:08:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.446 22:08:06 -- common/autotest_common.sh@10 -- # set +x 00:06:34.446 ************************************ 00:06:34.446 START TEST dpdk_mem_utility 00:06:34.446 ************************************ 00:06:34.446 22:08:06 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:34.704 * Looking for test storage... 00:06:34.704 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:34.704 22:08:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:34.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:34.704 22:08:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=73385 00:06:34.704 22:08:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 73385 00:06:34.704 22:08:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:34.704 22:08:06 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 73385 ']' 00:06:34.705 22:08:06 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.705 22:08:06 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.705 22:08:06 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.705 22:08:06 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.705 22:08:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:34.705 [2024-07-23 22:08:06.790522] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:06:34.705 [2024-07-23 22:08:06.790624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73385 ] 00:06:34.963 [2024-07-23 22:08:06.917269] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:34.963 [2024-07-23 22:08:06.938170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.963 [2024-07-23 22:08:06.994826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.963 [2024-07-23 22:08:07.042736] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.899 22:08:07 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.899 22:08:07 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:35.899 22:08:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:35.899 22:08:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:35.899 22:08:07 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.899 22:08:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:35.899 { 00:06:35.899 "filename": "/tmp/spdk_mem_dump.txt" 00:06:35.899 } 00:06:35.899 22:08:07 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.899 22:08:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:35.899 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:35.899 1 heaps totaling size 814.000000 MiB 00:06:35.899 size: 814.000000 MiB heap id: 0 00:06:35.899 end heaps---------- 00:06:35.899 8 mempools totaling size 598.116089 MiB 00:06:35.899 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:35.899 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:35.899 size: 84.521057 MiB name: bdev_io_73385 00:06:35.899 size: 51.011292 MiB name: evtpool_73385 00:06:35.899 size: 50.003479 MiB name: msgpool_73385 00:06:35.899 size: 21.763794 MiB name: PDU_Pool 00:06:35.899 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:35.899 size: 0.026123 MiB name: Session_Pool 00:06:35.899 end mempools------- 00:06:35.899 6 memzones
totaling size 4.142822 MiB 00:06:35.899 size: 1.000366 MiB name: RG_ring_0_73385 00:06:35.899 size: 1.000366 MiB name: RG_ring_1_73385 00:06:35.899 size: 1.000366 MiB name: RG_ring_4_73385 00:06:35.899 size: 1.000366 MiB name: RG_ring_5_73385 00:06:35.899 size: 0.125366 MiB name: RG_ring_2_73385 00:06:35.899 size: 0.015991 MiB name: RG_ring_3_73385 00:06:35.899 end memzones------- 00:06:35.899 22:08:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:35.899 heap id: 0 total size: 814.000000 MiB number of busy elements: 299 number of free elements: 15 00:06:35.899 list of free elements. size: 12.472107 MiB 00:06:35.899 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:35.899 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:35.899 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:35.899 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:35.899 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:35.899 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:35.899 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:35.899 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:35.899 element at address: 0x200000200000 with size: 0.833191 MiB 00:06:35.899 element at address: 0x20001aa00000 with size: 0.569336 MiB 00:06:35.899 element at address: 0x20000b200000 with size: 0.488892 MiB 00:06:35.899 element at address: 0x200000800000 with size: 0.486145 MiB 00:06:35.899 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:35.899 element at address: 0x200027e00000 with size: 0.395935 MiB 00:06:35.899 element at address: 0x200003a00000 with size: 0.347839 MiB 00:06:35.899 list of standard malloc elements. 
size: 199.265320 MiB 00:06:35.899 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:35.899 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:35.899 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:35.899 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:35.899 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:35.899 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:35.899 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:35.899 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:35.899 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:35.899 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:35.899 element at 
address: 0x2000002d6240 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d77c0 with size: 0.000183 MiB 
00:06:35.899 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:35.899 element at address: 0x20000087c740 with size: 0.000183 MiB 00:06:35.899 element at address: 0x20000087c800 with size: 0.000183 MiB 00:06:35.899 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:06:35.899 element at address: 0x20000087c980 with size: 0.000183 MiB 00:06:35.899 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:35.899 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:35.899 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:35.899 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:35.899 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:35.899 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:35.899 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:35.899 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a59180 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a59240 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a59300 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a59480 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a59540 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a59600 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a596c0 with 
size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a59780 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a59840 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a59900 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:35.899 element at address: 
0x200003a5abc0 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:35.899 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:35.900 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:35.900 
element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:35.900 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:35.900 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa92e00 with size: 0.000183 
MiB 00:06:35.900 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa94300 
with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:35.900 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e65680 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e6c280 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:06:35.900 element at 
address: 0x200027e6c540 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:35.900 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6d980 with size: 0.000183 MiB 
00:06:35.901 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6ee80 with 
size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:35.901 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:35.901 list of memzone associated elements. 
size: 602.262573 MiB 00:06:35.901 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:35.901 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:35.901 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:35.901 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:35.901 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:35.901 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_73385_0 00:06:35.901 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:35.901 associated memzone info: size: 48.002930 MiB name: MP_evtpool_73385_0 00:06:35.901 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:35.901 associated memzone info: size: 48.002930 MiB name: MP_msgpool_73385_0 00:06:35.901 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:35.901 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:35.901 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:35.901 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:35.901 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:35.901 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_73385 00:06:35.901 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:35.901 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_73385 00:06:35.901 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:35.901 associated memzone info: size: 1.007996 MiB name: MP_evtpool_73385 00:06:35.901 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:35.901 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:35.901 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:35.901 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:35.901 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:35.901 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:35.901 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:35.901 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:35.901 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:35.901 associated memzone info: size: 1.000366 MiB name: RG_ring_0_73385 00:06:35.901 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:35.901 associated memzone info: size: 1.000366 MiB name: RG_ring_1_73385 00:06:35.901 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:35.901 associated memzone info: size: 1.000366 MiB name: RG_ring_4_73385 00:06:35.901 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:35.901 associated memzone info: size: 1.000366 MiB name: RG_ring_5_73385 00:06:35.901 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:35.901 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_73385 00:06:35.901 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:35.901 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:35.901 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:35.901 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:35.901 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:35.901 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:35.901 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:35.901 associated memzone info: size: 0.125366 MiB name: RG_ring_2_73385 00:06:35.901 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:35.901 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:35.901 element at address: 0x200027e65740 with size: 0.023743 MiB 00:06:35.901 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:35.901 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:35.901 
associated memzone info: size: 0.015991 MiB name: RG_ring_3_73385 00:06:35.901 element at address: 0x200027e6b880 with size: 0.002441 MiB 00:06:35.901 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:35.901 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:35.901 associated memzone info: size: 0.000183 MiB name: MP_msgpool_73385 00:06:35.901 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:35.901 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_73385 00:06:35.901 element at address: 0x200027e6c340 with size: 0.000305 MiB 00:06:35.901 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:35.901 22:08:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:35.901 22:08:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 73385 00:06:35.901 22:08:07 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 73385 ']' 00:06:35.901 22:08:07 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 73385 00:06:35.901 22:08:07 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:35.901 22:08:07 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:35.901 22:08:07 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73385 00:06:35.901 killing process with pid 73385 00:06:35.901 22:08:07 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:35.901 22:08:07 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:35.902 22:08:07 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73385' 00:06:35.902 22:08:07 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 73385 00:06:35.902 22:08:07 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 73385 00:06:36.160 00:06:36.160 real 0m1.618s 00:06:36.160 user 0m1.757s 00:06:36.160 sys 
0m0.433s 00:06:36.160 ************************************ 00:06:36.160 END TEST dpdk_mem_utility 00:06:36.160 ************************************ 00:06:36.160 22:08:08 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.160 22:08:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:36.160 22:08:08 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:36.160 22:08:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.160 22:08:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.160 22:08:08 -- common/autotest_common.sh@10 -- # set +x 00:06:36.160 ************************************ 00:06:36.160 START TEST event 00:06:36.160 ************************************ 00:06:36.160 22:08:08 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:36.417 * Looking for test storage... 00:06:36.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:36.417 22:08:08 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:36.417 22:08:08 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:36.417 22:08:08 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:36.417 22:08:08 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:36.418 22:08:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.418 22:08:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.418 ************************************ 00:06:36.418 START TEST event_perf 00:06:36.418 ************************************ 00:06:36.418 22:08:08 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:36.418 Running I/O for 1 seconds...[2024-07-23 22:08:08.441882] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 
24.07.0-rc2 initialization... 00:06:36.418 [2024-07-23 22:08:08.442136] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73462 ] 00:06:36.418 [2024-07-23 22:08:08.568298] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:36.418 [2024-07-23 22:08:08.586121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:36.676 [2024-07-23 22:08:08.638509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.676 [2024-07-23 22:08:08.638659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.676 [2024-07-23 22:08:08.640045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:36.676 [2024-07-23 22:08:08.640048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.611 Running I/O for 1 seconds... 00:06:37.611 lcore 0: 196660 00:06:37.611 lcore 1: 196660 00:06:37.611 lcore 2: 196661 00:06:37.611 lcore 3: 196663 00:06:37.611 done. 
00:06:37.611 00:06:37.611 real 0m1.288s 00:06:37.611 user 0m4.102s 00:06:37.611 sys 0m0.066s 00:06:37.611 22:08:09 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.611 ************************************ 00:06:37.611 END TEST event_perf 00:06:37.611 ************************************ 00:06:37.611 22:08:09 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:37.611 22:08:09 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:37.611 22:08:09 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:37.611 22:08:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.611 22:08:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.611 ************************************ 00:06:37.611 START TEST event_reactor 00:06:37.611 ************************************ 00:06:37.611 22:08:09 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:37.611 [2024-07-23 22:08:09.788569] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:06:37.611 [2024-07-23 22:08:09.788666] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73495 ] 00:06:37.869 [2024-07-23 22:08:09.913836] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:37.869 [2024-07-23 22:08:09.933101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.869 [2024-07-23 22:08:09.981781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.242 test_start 00:06:39.242 oneshot 00:06:39.242 tick 100 00:06:39.242 tick 100 00:06:39.242 tick 250 00:06:39.242 tick 100 00:06:39.242 tick 100 00:06:39.242 tick 250 00:06:39.242 tick 100 00:06:39.242 tick 500 00:06:39.242 tick 100 00:06:39.242 tick 100 00:06:39.242 tick 250 00:06:39.242 tick 100 00:06:39.242 tick 100 00:06:39.242 test_end 00:06:39.242 00:06:39.242 real 0m1.282s 00:06:39.242 user 0m1.121s 00:06:39.242 sys 0m0.055s 00:06:39.242 ************************************ 00:06:39.242 END TEST event_reactor 00:06:39.242 ************************************ 00:06:39.242 22:08:11 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.242 22:08:11 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:39.242 22:08:11 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:39.242 22:08:11 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:39.242 22:08:11 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.242 22:08:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.242 ************************************ 00:06:39.242 START TEST event_reactor_perf 00:06:39.242 ************************************ 00:06:39.242 22:08:11 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:39.242 [2024-07-23 22:08:11.129030] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:06:39.242 [2024-07-23 22:08:11.129131] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73525 ] 00:06:39.242 [2024-07-23 22:08:11.254729] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:39.242 [2024-07-23 22:08:11.272678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.242 [2024-07-23 22:08:11.321657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.616 test_start 00:06:40.616 test_end 00:06:40.616 Performance: 515694 events per second 00:06:40.616 ************************************ 00:06:40.616 END TEST event_reactor_perf 00:06:40.616 ************************************ 00:06:40.616 00:06:40.616 real 0m1.279s 00:06:40.616 user 0m1.115s 00:06:40.616 sys 0m0.057s 00:06:40.616 22:08:12 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.616 22:08:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:40.616 22:08:12 event -- event/event.sh@49 -- # uname -s 00:06:40.616 22:08:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:40.616 22:08:12 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:40.616 22:08:12 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:40.616 22:08:12 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.616 22:08:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:40.616 ************************************ 00:06:40.616 START TEST event_scheduler 00:06:40.616 ************************************ 00:06:40.616 22:08:12 event.event_scheduler -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:40.616 * Looking for test storage... 00:06:40.616 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:40.616 22:08:12 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:40.616 22:08:12 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=73591 00:06:40.616 22:08:12 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:40.616 22:08:12 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 73591 00:06:40.616 22:08:12 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 73591 ']' 00:06:40.616 22:08:12 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.616 22:08:12 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:40.616 22:08:12 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.616 22:08:12 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.616 22:08:12 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.616 22:08:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:40.616 [2024-07-23 22:08:12.602971] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:06:40.616 [2024-07-23 22:08:12.603083] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73591 ] 00:06:40.616 [2024-07-23 22:08:12.734900] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:40.616 [2024-07-23 22:08:12.751791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:40.874 [2024-07-23 22:08:12.812798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.874 [2024-07-23 22:08:12.812966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.874 [2024-07-23 22:08:12.814467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.874 [2024-07-23 22:08:12.814477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.439 22:08:13 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.439 22:08:13 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:41.439 22:08:13 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:41.439 22:08:13 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.439 22:08:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:41.439 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:41.439 POWER: Cannot set governor of lcore 0 to userspace 00:06:41.439 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:41.439 POWER: Cannot set governor of lcore 0 to performance 00:06:41.439 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:41.439 POWER: Cannot set governor of lcore 0 to userspace 
00:06:41.439 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:41.439 POWER: Cannot set governor of lcore 0 to userspace 00:06:41.439 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:41.439 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:41.439 POWER: Unable to set Power Management Environment for lcore 0 00:06:41.439 [2024-07-23 22:08:13.543759] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:41.439 [2024-07-23 22:08:13.543772] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:41.439 [2024-07-23 22:08:13.543780] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:41.439 [2024-07-23 22:08:13.543791] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:41.439 [2024-07-23 22:08:13.543798] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:41.439 [2024-07-23 22:08:13.543805] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:41.439 22:08:13 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.439 22:08:13 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:41.439 22:08:13 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.439 22:08:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:41.439 [2024-07-23 22:08:13.592382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:41.439 [2024-07-23 22:08:13.613426] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:41.439 22:08:13 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.440 22:08:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:41.440 22:08:13 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:41.440 22:08:13 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.440 22:08:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:41.440 ************************************ 00:06:41.440 START TEST scheduler_create_thread 00:06:41.440 ************************************ 00:06:41.440 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:41.440 22:08:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:41.440 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.440 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.697 2 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.698 3 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.698 4 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.698 5 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.698 6 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:41.698 7 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.698 8 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.698 9 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.698 10 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.698 22:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.072 22:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.072 22:08:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:43.072 22:08:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:43.072 22:08:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.072 22:08:15 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.446 22:08:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.446 00:06:44.446 real 0m2.613s 00:06:44.446 user 0m0.023s 00:06:44.446 sys 0m0.008s 00:06:44.446 22:08:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.446 22:08:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.446 ************************************ 00:06:44.446 END TEST scheduler_create_thread 00:06:44.446 ************************************ 00:06:44.446 22:08:16 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:44.446 22:08:16 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 73591 00:06:44.446 22:08:16 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 73591 ']' 00:06:44.446 22:08:16 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 73591 00:06:44.446 22:08:16 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:44.446 22:08:16 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:44.446 22:08:16 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73591 00:06:44.446 22:08:16 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:44.446 killing process with pid 73591 00:06:44.446 22:08:16 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:44.446 22:08:16 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73591' 00:06:44.446 22:08:16 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 73591 00:06:44.446 22:08:16 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 73591 00:06:44.704 [2024-07-23 22:08:16.716598] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:45.010 00:06:45.010 real 0m4.458s 00:06:45.010 user 0m8.443s 00:06:45.010 sys 0m0.368s 00:06:45.010 22:08:16 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.010 22:08:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:45.010 ************************************ 00:06:45.010 END TEST event_scheduler 00:06:45.010 ************************************ 00:06:45.010 22:08:16 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:45.010 22:08:16 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:45.010 22:08:16 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.010 22:08:16 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.010 22:08:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.010 ************************************ 00:06:45.010 START TEST app_repeat 00:06:45.010 ************************************ 00:06:45.010 22:08:16 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:45.010 22:08:16 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.010 22:08:16 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.010 22:08:16 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:45.010 22:08:16 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.010 22:08:16 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:45.010 22:08:16 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:45.010 22:08:16 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:45.010 22:08:16 event.app_repeat -- event/event.sh@19 -- # repeat_pid=73686 00:06:45.010 22:08:16 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:45.010 
22:08:16 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:45.010 Process app_repeat pid: 73686 00:06:45.010 22:08:16 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 73686' 00:06:45.010 22:08:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:45.010 spdk_app_start Round 0 00:06:45.010 22:08:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:45.010 22:08:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 73686 /var/tmp/spdk-nbd.sock 00:06:45.010 22:08:16 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 73686 ']' 00:06:45.010 22:08:16 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:45.010 22:08:16 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:45.010 22:08:16 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:45.010 22:08:16 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.010 22:08:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:45.010 [2024-07-23 22:08:17.015354] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:06:45.010 [2024-07-23 22:08:17.015453] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73686 ] 00:06:45.010 [2024-07-23 22:08:17.141498] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:45.010 [2024-07-23 22:08:17.158091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:45.290 [2024-07-23 22:08:17.208122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.290 [2024-07-23 22:08:17.208139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.290 [2024-07-23 22:08:17.249585] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:45.856 22:08:17 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.856 22:08:17 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:45.856 22:08:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.114 Malloc0 00:06:46.114 22:08:18 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.114 Malloc1 00:06:46.372 22:08:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.372 22:08:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.372 22:08:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.372 22:08:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:46.372 22:08:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.372 22:08:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:46.372 22:08:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.372 22:08:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.372 22:08:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.372 22:08:18 
event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:46.372 22:08:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.372 22:08:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:46.372 22:08:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:46.372 22:08:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:46.372 22:08:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.372 22:08:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:46.372 /dev/nbd0 00:06:46.372 22:08:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:46.372 22:08:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:46.372 22:08:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:46.372 22:08:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:46.372 22:08:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:46.372 22:08:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:46.372 22:08:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:46.372 22:08:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:46.372 22:08:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:46.372 22:08:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:46.372 22:08:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.372 1+0 records in 00:06:46.372 1+0 records out 00:06:46.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351322 s, 11.7 MB/s 00:06:46.372 22:08:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.372 22:08:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:46.372 22:08:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.372 22:08:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:46.372 22:08:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:46.372 22:08:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.372 22:08:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.372 22:08:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:46.630 /dev/nbd1 00:06:46.630 22:08:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:46.630 22:08:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:46.630 22:08:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:46.630 22:08:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:46.630 22:08:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:46.630 22:08:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:46.630 22:08:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:46.630 22:08:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:46.630 22:08:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:46.630 22:08:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:46.630 22:08:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.630 1+0 records in 00:06:46.630 1+0 records out 00:06:46.630 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000418903 s, 9.8 MB/s 00:06:46.630 22:08:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.630 22:08:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:46.630 22:08:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.630 22:08:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:46.630 22:08:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:46.630 22:08:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.630 22:08:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.630 22:08:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.630 22:08:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.630 22:08:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.889 22:08:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:46.889 { 00:06:46.889 "nbd_device": "/dev/nbd0", 00:06:46.889 "bdev_name": "Malloc0" 00:06:46.889 }, 00:06:46.889 { 00:06:46.889 "nbd_device": "/dev/nbd1", 00:06:46.889 "bdev_name": "Malloc1" 00:06:46.889 } 00:06:46.889 ]' 00:06:46.889 22:08:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:46.889 { 00:06:46.889 "nbd_device": "/dev/nbd0", 00:06:46.889 "bdev_name": "Malloc0" 00:06:46.889 }, 00:06:46.889 { 00:06:46.889 "nbd_device": "/dev/nbd1", 00:06:46.889 "bdev_name": "Malloc1" 00:06:46.889 } 00:06:46.889 ]' 00:06:46.889 22:08:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.889 22:08:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:46.889 /dev/nbd1' 00:06:46.889 22:08:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep 
-c /dev/nbd 00:06:46.889 22:08:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:46.889 /dev/nbd1' 00:06:46.889 22:08:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:46.889 22:08:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:46.889 22:08:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:46.889 22:08:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:46.889 22:08:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:46.889 22:08:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.889 22:08:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.889 22:08:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:46.889 22:08:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.889 22:08:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:46.889 22:08:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:46.889 256+0 records in 00:06:46.889 256+0 records out 00:06:46.889 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00518361 s, 202 MB/s 00:06:46.889 22:08:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.889 22:08:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:46.889 256+0 records in 00:06:46.889 256+0 records out 00:06:46.889 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218786 s, 47.9 MB/s 00:06:46.889 22:08:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.889 22:08:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 
bs=4096 count=256 oflag=direct 00:06:47.147 256+0 records in 00:06:47.147 256+0 records out 00:06:47.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0272267 s, 38.5 MB/s 00:06:47.147 22:08:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:47.147 22:08:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.147 22:08:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.147 22:08:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:47.147 22:08:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:47.147 22:08:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:47.147 22:08:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:47.147 22:08:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.147 22:08:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:47.147 22:08:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.147 22:08:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:47.147 22:08:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:47.147 22:08:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:47.147 22:08:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.147 22:08:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.147 22:08:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:47.147 22:08:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 
00:06:47.147 22:08:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.147 22:08:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:47.405 22:08:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:47.405 22:08:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:47.405 22:08:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:47.405 22:08:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.405 22:08:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.405 22:08:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:47.405 22:08:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:47.405 22:08:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.405 22:08:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.405 22:08:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:47.663 22:08:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:47.663 22:08:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:47.663 22:08:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:47.663 22:08:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.663 22:08:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.663 22:08:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:47.663 22:08:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:47.663 22:08:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.663 22:08:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 
00:06:47.663 22:08:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.663 22:08:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.663 22:08:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:47.663 22:08:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:47.663 22:08:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.920 22:08:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:47.920 22:08:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:47.920 22:08:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.920 22:08:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:47.920 22:08:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:47.920 22:08:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:47.920 22:08:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:47.920 22:08:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:47.920 22:08:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:47.920 22:08:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:48.178 22:08:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:48.178 [2024-07-23 22:08:20.269280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:48.178 [2024-07-23 22:08:20.318982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.178 [2024-07-23 22:08:20.320214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.178 [2024-07-23 22:08:20.361485] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:48.178 [2024-07-23 22:08:20.361578] notify.c: 
45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:48.178 [2024-07-23 22:08:20.361590] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:51.459 spdk_app_start Round 1 00:06:51.459 22:08:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:51.459 22:08:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:51.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:51.459 22:08:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 73686 /var/tmp/spdk-nbd.sock 00:06:51.459 22:08:23 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 73686 ']' 00:06:51.459 22:08:23 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:51.459 22:08:23 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.459 22:08:23 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:51.459 22:08:23 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.459 22:08:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.459 22:08:23 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.459 22:08:23 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:51.459 22:08:23 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.459 Malloc0 00:06:51.459 22:08:23 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.716 Malloc1 00:06:51.974 22:08:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.974 22:08:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.974 22:08:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.974 22:08:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:51.974 22:08:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.974 22:08:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:51.974 22:08:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.975 22:08:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.975 22:08:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.975 22:08:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:51.975 22:08:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.975 22:08:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:51.975 22:08:23 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:51.975 22:08:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:51.975 22:08:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.975 22:08:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:51.975 /dev/nbd0 00:06:51.975 22:08:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.975 22:08:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.975 22:08:24 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:51.975 22:08:24 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:51.975 22:08:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:51.975 22:08:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:51.975 22:08:24 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:51.975 22:08:24 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:51.975 22:08:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:51.975 22:08:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:51.975 22:08:24 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.975 1+0 records in 00:06:51.975 1+0 records out 00:06:51.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177952 s, 23.0 MB/s 00:06:51.975 22:08:24 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:51.975 22:08:24 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:51.975 22:08:24 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:51.975 
22:08:24 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:51.975 22:08:24 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:51.975 22:08:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.975 22:08:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.975 22:08:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:52.233 /dev/nbd1 00:06:52.233 22:08:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:52.233 22:08:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:52.233 22:08:24 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:52.233 22:08:24 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:52.233 22:08:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:52.233 22:08:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:52.233 22:08:24 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:52.233 22:08:24 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:52.233 22:08:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:52.233 22:08:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:52.233 22:08:24 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.233 1+0 records in 00:06:52.233 1+0 records out 00:06:52.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025382 s, 16.1 MB/s 00:06:52.492 22:08:24 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.492 22:08:24 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:52.492 22:08:24 event.app_repeat 
-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.492 22:08:24 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:52.492 22:08:24 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:52.492 22:08:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.492 22:08:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.492 22:08:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.492 22:08:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.492 22:08:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.492 22:08:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:52.492 { 00:06:52.492 "nbd_device": "/dev/nbd0", 00:06:52.492 "bdev_name": "Malloc0" 00:06:52.492 }, 00:06:52.492 { 00:06:52.492 "nbd_device": "/dev/nbd1", 00:06:52.492 "bdev_name": "Malloc1" 00:06:52.492 } 00:06:52.492 ]' 00:06:52.492 22:08:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.492 22:08:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:52.492 { 00:06:52.492 "nbd_device": "/dev/nbd0", 00:06:52.492 "bdev_name": "Malloc0" 00:06:52.492 }, 00:06:52.492 { 00:06:52.492 "nbd_device": "/dev/nbd1", 00:06:52.492 "bdev_name": "Malloc1" 00:06:52.492 } 00:06:52.492 ]' 00:06:52.492 22:08:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:52.492 /dev/nbd1' 00:06:52.492 22:08:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.492 22:08:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:52.492 /dev/nbd1' 00:06:52.492 22:08:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:52.492 22:08:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:52.492 
22:08:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:52.492 22:08:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:52.492 22:08:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:52.492 22:08:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.492 22:08:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.492 22:08:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:52.492 22:08:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:52.492 22:08:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:52.492 22:08:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:52.492 256+0 records in 00:06:52.492 256+0 records out 00:06:52.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00993069 s, 106 MB/s 00:06:52.492 22:08:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.492 22:08:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:52.751 256+0 records in 00:06:52.751 256+0 records out 00:06:52.751 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021538 s, 48.7 MB/s 00:06:52.751 22:08:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.751 22:08:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:52.751 256+0 records in 00:06:52.751 256+0 records out 00:06:52.751 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221028 s, 47.4 MB/s 00:06:52.751 22:08:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:52.751 22:08:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.751 22:08:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.751 22:08:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:52.751 22:08:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:52.751 22:08:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:52.751 22:08:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:52.751 22:08:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.751 22:08:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:52.751 22:08:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.751 22:08:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:52.751 22:08:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:52.751 22:08:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:52.751 22:08:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.751 22:08:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.751 22:08:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.751 22:08:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:52.751 22:08:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.751 22:08:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:53.010 22:08:24 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:53.010 22:08:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:53.010 22:08:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:53.010 22:08:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.010 22:08:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.010 22:08:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:53.010 22:08:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.010 22:08:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.010 22:08:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.010 22:08:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:53.010 22:08:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:53.010 22:08:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:53.010 22:08:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:53.010 22:08:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.010 22:08:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.010 22:08:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:53.010 22:08:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.010 22:08:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.010 22:08:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.010 22:08:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.010 22:08:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.268 22:08:25 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:53.268 22:08:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.268 22:08:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:53.268 22:08:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:53.268 22:08:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:53.268 22:08:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.268 22:08:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:53.268 22:08:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:53.268 22:08:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:53.268 22:08:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:53.268 22:08:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:53.268 22:08:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:53.268 22:08:25 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:53.527 22:08:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:53.785 [2024-07-23 22:08:25.836999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.785 [2024-07-23 22:08:25.886486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.785 [2024-07-23 22:08:25.886492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.785 [2024-07-23 22:08:25.928812] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:53.785 [2024-07-23 22:08:25.928910] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:53.785 [2024-07-23 22:08:25.928921] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
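The round that just completed follows a fixed pattern: create two malloc bdevs over RPC, attach them to /dev/nbd0 and /dev/nbd1, dd a 1 MiB random pattern file onto each device, then `cmp` each device back against the pattern file before detaching. That write/verify core can be sketched as a minimal shell loop; note this is a stand-alone sketch in which plain temp files substitute for the nbd devices (an assumption so it runs anywhere, not the actual SPDK fixture, which needs the running app and `rpc.py -s /var/tmp/spdk-nbd.sock`):

```shell
# Sketch of the nbd write/verify pattern exercised in the log above.
# The real test targets /dev/nbd0 and /dev/nbd1 backed by SPDK malloc
# bdevs; temp files stand in here so the sketch is self-contained.
tmp=$(mktemp); dev0=$(mktemp); dev1=$(mktemp)

# Write phase: fill a 1 MiB pattern file, then copy it to each "device"
# (mirrors: dd if=/dev/urandom of=nbdrandtest bs=4096 count=256).
dd if=/dev/urandom of="$tmp" bs=4096 count=256 2>/dev/null
for dev in "$dev0" "$dev1"; do
  dd if="$tmp" of="$dev" bs=4096 count=256 2>/dev/null
done

# Verify phase: byte-compare the first 1 MiB of each "device" with the
# pattern file (mirrors: cmp -b -n 1M nbdrandtest /dev/nbdX).
result=ok
for dev in "$dev0" "$dev1"; do
  cmp -n 1048576 "$tmp" "$dev" || result=fail
done
echo "$result"
rm -f "$tmp" "$dev0" "$dev1"
```

In the real run the same loop body executes once per round of `app_repeat`, with the devices torn down via `nbd_stop_disk` and the pattern file removed between rounds.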
00:06:57.068 22:08:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:57.068 spdk_app_start Round 2 00:06:57.068 22:08:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:57.068 22:08:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 73686 /var/tmp/spdk-nbd.sock 00:06:57.068 22:08:28 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 73686 ']' 00:06:57.068 22:08:28 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:57.068 22:08:28 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:57.068 22:08:28 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:57.068 22:08:28 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.068 22:08:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:57.068 22:08:28 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.068 22:08:28 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:57.068 22:08:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:57.068 Malloc0 00:06:57.068 22:08:29 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:57.326 Malloc1 00:06:57.326 22:08:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:57.326 22:08:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.326 22:08:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:57.326 
22:08:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:57.326 22:08:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.326 22:08:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:57.327 22:08:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:57.327 22:08:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.327 22:08:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:57.327 22:08:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:57.327 22:08:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.327 22:08:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:57.327 22:08:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:57.327 22:08:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:57.327 22:08:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:57.327 22:08:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:57.585 /dev/nbd0 00:06:57.585 22:08:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:57.585 22:08:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:57.585 22:08:29 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:57.585 22:08:29 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:57.585 22:08:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:57.585 22:08:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:57.585 22:08:29 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:57.585 22:08:29 
event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:57.585 22:08:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:57.585 22:08:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:57.585 22:08:29 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:57.585 1+0 records in 00:06:57.585 1+0 records out 00:06:57.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213293 s, 19.2 MB/s 00:06:57.585 22:08:29 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:57.585 22:08:29 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:57.585 22:08:29 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:57.585 22:08:29 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:57.585 22:08:29 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:57.585 22:08:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.585 22:08:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:57.585 22:08:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:57.843 /dev/nbd1 00:06:57.843 22:08:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:57.843 22:08:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:57.843 22:08:29 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:57.843 22:08:29 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:57.843 22:08:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:57.843 22:08:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:57.843 22:08:29 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:57.843 22:08:29 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:57.843 22:08:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:57.843 22:08:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:57.843 22:08:29 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:57.843 1+0 records in 00:06:57.843 1+0 records out 00:06:57.843 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228885 s, 17.9 MB/s 00:06:57.843 22:08:29 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:57.843 22:08:29 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:57.843 22:08:29 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:57.843 22:08:29 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:57.843 22:08:29 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:57.843 22:08:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.843 22:08:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:57.843 22:08:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:57.843 22:08:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.843 22:08:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.102 22:08:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:58.102 { 00:06:58.102 "nbd_device": "/dev/nbd0", 00:06:58.102 "bdev_name": "Malloc0" 00:06:58.102 }, 00:06:58.102 { 00:06:58.102 "nbd_device": "/dev/nbd1", 00:06:58.102 "bdev_name": 
"Malloc1" 00:06:58.102 } 00:06:58.102 ]' 00:06:58.102 22:08:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:58.102 { 00:06:58.102 "nbd_device": "/dev/nbd0", 00:06:58.102 "bdev_name": "Malloc0" 00:06:58.102 }, 00:06:58.102 { 00:06:58.102 "nbd_device": "/dev/nbd1", 00:06:58.102 "bdev_name": "Malloc1" 00:06:58.102 } 00:06:58.102 ]' 00:06:58.102 22:08:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.102 22:08:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:58.102 /dev/nbd1' 00:06:58.102 22:08:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:58.102 /dev/nbd1' 00:06:58.102 22:08:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.102 22:08:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:58.102 22:08:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:58.102 22:08:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:58.102 22:08:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:58.102 22:08:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:58.102 22:08:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.102 22:08:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:58.102 22:08:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:58.102 22:08:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:58.102 22:08:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:58.102 22:08:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:58.102 256+0 records in 00:06:58.102 256+0 records out 00:06:58.102 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00816782 s, 128 MB/s 
00:06:58.102 22:08:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:58.102 22:08:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:58.102 256+0 records in 00:06:58.102 256+0 records out 00:06:58.102 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0190471 s, 55.1 MB/s 00:06:58.102 22:08:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:58.103 22:08:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:58.103 256+0 records in 00:06:58.103 256+0 records out 00:06:58.103 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227706 s, 46.0 MB/s 00:06:58.103 22:08:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:58.103 22:08:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.103 22:08:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:58.103 22:08:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:58.103 22:08:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:58.103 22:08:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:58.103 22:08:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:58.103 22:08:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:58.103 22:08:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:58.103 22:08:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:58.103 22:08:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:58.103 22:08:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:58.103 22:08:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:58.103 22:08:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.103 22:08:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.103 22:08:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:58.103 22:08:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:58.103 22:08:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.103 22:08:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:58.361 22:08:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:58.361 22:08:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:58.361 22:08:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:58.361 22:08:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.361 22:08:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.361 22:08:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:58.361 22:08:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:58.361 22:08:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.361 22:08:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.361 22:08:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:58.621 22:08:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:58.621 22:08:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:06:58.621 22:08:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:58.621 22:08:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.621 22:08:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.621 22:08:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:58.621 22:08:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:58.621 22:08:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.621 22:08:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.621 22:08:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.621 22:08:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.889 22:08:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:58.889 22:08:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.889 22:08:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:58.889 22:08:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:58.889 22:08:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:58.889 22:08:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.889 22:08:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:58.889 22:08:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:58.889 22:08:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:58.889 22:08:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:58.889 22:08:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:58.889 22:08:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:58.889 22:08:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:59.147 22:08:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:59.405 [2024-07-23 22:08:31.412343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:59.405 [2024-07-23 22:08:31.461682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.405 [2024-07-23 22:08:31.461690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.405 [2024-07-23 22:08:31.503121] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.405 [2024-07-23 22:08:31.503200] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:59.405 [2024-07-23 22:08:31.503227] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:02.686 22:08:34 event.app_repeat -- event/event.sh@38 -- # waitforlisten 73686 /var/tmp/spdk-nbd.sock 00:07:02.686 22:08:34 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 73686 ']' 00:07:02.686 22:08:34 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:02.686 22:08:34 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.686 22:08:34 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:02.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:02.686 22:08:34 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.686 22:08:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:02.686 22:08:34 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.686 22:08:34 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:02.686 22:08:34 event.app_repeat -- event/event.sh@39 -- # killprocess 73686 00:07:02.686 22:08:34 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 73686 ']' 00:07:02.686 22:08:34 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 73686 00:07:02.686 22:08:34 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:07:02.686 22:08:34 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:02.686 22:08:34 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73686 00:07:02.686 22:08:34 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:02.686 killing process with pid 73686 00:07:02.686 22:08:34 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:02.686 22:08:34 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73686' 00:07:02.686 22:08:34 event.app_repeat -- common/autotest_common.sh@967 -- # kill 73686 00:07:02.686 22:08:34 event.app_repeat -- common/autotest_common.sh@972 -- # wait 73686 00:07:02.686 spdk_app_start is called in Round 0. 00:07:02.686 Shutdown signal received, stop current app iteration 00:07:02.686 Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 reinitialization... 00:07:02.686 spdk_app_start is called in Round 1. 00:07:02.686 Shutdown signal received, stop current app iteration 00:07:02.686 Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 reinitialization... 00:07:02.686 spdk_app_start is called in Round 2. 
00:07:02.686 Shutdown signal received, stop current app iteration 00:07:02.686 Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 reinitialization... 00:07:02.686 spdk_app_start is called in Round 3. 00:07:02.686 Shutdown signal received, stop current app iteration 00:07:02.686 22:08:34 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:02.686 22:08:34 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:02.686 00:07:02.686 real 0m17.706s 00:07:02.686 user 0m38.889s 00:07:02.686 sys 0m3.063s 00:07:02.686 22:08:34 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.686 22:08:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:02.686 ************************************ 00:07:02.686 END TEST app_repeat 00:07:02.686 ************************************ 00:07:02.686 22:08:34 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:02.686 22:08:34 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:02.686 22:08:34 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:02.686 22:08:34 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.686 22:08:34 event -- common/autotest_common.sh@10 -- # set +x 00:07:02.686 ************************************ 00:07:02.686 START TEST cpu_locks 00:07:02.686 ************************************ 00:07:02.686 22:08:34 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:02.686 * Looking for test storage... 
00:07:02.686 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:02.686 22:08:34 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:02.686 22:08:34 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:02.686 22:08:34 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:02.686 22:08:34 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:02.686 22:08:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:02.686 22:08:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.686 22:08:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.686 ************************************ 00:07:02.686 START TEST default_locks 00:07:02.686 ************************************ 00:07:02.686 22:08:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:07:02.686 22:08:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=74109 00:07:02.686 22:08:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 74109 00:07:02.686 22:08:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:02.686 22:08:34 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 74109 ']' 00:07:02.686 22:08:34 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.686 22:08:34 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.686 22:08:34 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:02.686 22:08:34 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.686 22:08:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.944 [2024-07-23 22:08:34.927221] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:02.944 [2024-07-23 22:08:34.927331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74109 ] 00:07:02.944 [2024-07-23 22:08:35.053720] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:02.944 [2024-07-23 22:08:35.073989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.944 [2024-07-23 22:08:35.122949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.201 [2024-07-23 22:08:35.164540] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:03.767 22:08:35 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.767 22:08:35 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:07:03.767 22:08:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 74109 00:07:03.767 22:08:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 74109 00:07:03.767 22:08:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:04.332 22:08:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 74109 00:07:04.332 22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 74109 ']' 00:07:04.332 22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 74109 00:07:04.332 
22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:07:04.332 22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:04.332 22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74109 00:07:04.332 22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:04.332 22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:04.332 killing process with pid 74109 00:07:04.332 22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74109' 00:07:04.332 22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 74109 00:07:04.332 22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 74109 00:07:04.590 22:08:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 74109 00:07:04.590 22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:04.590 22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 74109 00:07:04.590 22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:04.590 22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.590 22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:04.590 22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.590 22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 74109 00:07:04.590 22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 74109 ']' 00:07:04.590 22:08:36 event.cpu_locks.default_locks -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.590 22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.590 22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.590 22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.590 22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.590 ERROR: process (pid: 74109) is no longer running 00:07:04.590 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (74109) - No such process 00:07:04.590 22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.590 22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:07:04.590 22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:04.590 22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:04.590 22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:04.590 22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:04.590 22:08:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:04.590 22:08:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:04.590 22:08:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:04.590 22:08:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:04.590 00:07:04.590 real 0m1.901s 00:07:04.590 user 0m2.067s 00:07:04.590 sys 0m0.603s 00:07:04.590 22:08:36 event.cpu_locks.default_locks -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.590 ************************************ 00:07:04.590 END TEST default_locks 00:07:04.590 ************************************ 00:07:04.590 22:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.849 22:08:36 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:04.849 22:08:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:04.849 22:08:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.849 22:08:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.849 ************************************ 00:07:04.849 START TEST default_locks_via_rpc 00:07:04.849 ************************************ 00:07:04.849 22:08:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:07:04.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.849 22:08:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=74157 00:07:04.849 22:08:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 74157 00:07:04.849 22:08:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 74157 ']' 00:07:04.849 22:08:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:04.849 22:08:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.849 22:08:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.849 22:08:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:04.849 22:08:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.849 22:08:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.849 [2024-07-23 22:08:36.888999] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:04.849 [2024-07-23 22:08:36.889362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74157 ] 00:07:04.849 [2024-07-23 22:08:37.016219] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:04.849 [2024-07-23 22:08:37.033357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.107 [2024-07-23 22:08:37.082574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.107 [2024-07-23 22:08:37.124269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:05.672 22:08:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.672 22:08:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:05.672 22:08:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:05.672 22:08:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.672 22:08:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.672 22:08:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.672 22:08:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:05.672 22:08:37 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:05.672 22:08:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:05.672 22:08:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:05.672 22:08:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:05.672 22:08:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.672 22:08:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.672 22:08:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.672 22:08:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 74157 00:07:05.929 22:08:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 74157 00:07:05.929 22:08:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:06.187 22:08:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 74157 00:07:06.187 22:08:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 74157 ']' 00:07:06.187 22:08:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 74157 00:07:06.187 22:08:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:07:06.187 22:08:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:06.187 22:08:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74157 00:07:06.187 killing process with pid 74157 00:07:06.187 22:08:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:06.188 22:08:38 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:06.188 22:08:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74157' 00:07:06.188 22:08:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 74157 00:07:06.188 22:08:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 74157 00:07:06.446 ************************************ 00:07:06.446 END TEST default_locks_via_rpc 00:07:06.446 ************************************ 00:07:06.446 00:07:06.446 real 0m1.783s 00:07:06.446 user 0m1.933s 00:07:06.446 sys 0m0.536s 00:07:06.446 22:08:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.446 22:08:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.704 22:08:38 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:06.704 22:08:38 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.704 22:08:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.704 22:08:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.704 ************************************ 00:07:06.704 START TEST non_locking_app_on_locked_coremask 00:07:06.704 ************************************ 00:07:06.704 22:08:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:07:06.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:06.704 22:08:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=74203 00:07:06.704 22:08:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 74203 /var/tmp/spdk.sock 00:07:06.704 22:08:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:06.704 22:08:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 74203 ']' 00:07:06.704 22:08:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.704 22:08:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.704 22:08:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.704 22:08:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.704 22:08:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.704 [2024-07-23 22:08:38.736219] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:06.704 [2024-07-23 22:08:38.736319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74203 ] 00:07:06.704 [2024-07-23 22:08:38.863018] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:06.704 [2024-07-23 22:08:38.880979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.963 [2024-07-23 22:08:38.930228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.963 [2024-07-23 22:08:38.971984] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:07.529 22:08:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.529 22:08:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:07.529 22:08:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:07.529 22:08:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=74219 00:07:07.529 22:08:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 74219 /var/tmp/spdk2.sock 00:07:07.529 22:08:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 74219 ']' 00:07:07.529 22:08:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:07.529 22:08:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.529 22:08:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:07.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:07.529 22:08:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.529 22:08:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.529 [2024-07-23 22:08:39.648949] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:07.529 [2024-07-23 22:08:39.649271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74219 ] 00:07:07.788 [2024-07-23 22:08:39.768504] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:07.788 [2024-07-23 22:08:39.785524] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:07.788 [2024-07-23 22:08:39.785593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.788 [2024-07-23 22:08:39.882826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.788 [2024-07-23 22:08:39.968093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:08.722 22:08:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.722 22:08:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:08.722 22:08:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 74203 00:07:08.722 22:08:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 74203 00:07:08.722 22:08:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:09.698 22:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- 
event/cpu_locks.sh@89 -- # killprocess 74203 00:07:09.698 22:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 74203 ']' 00:07:09.698 22:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 74203 00:07:09.698 22:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:09.698 22:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:09.698 22:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74203 00:07:09.698 killing process with pid 74203 00:07:09.698 22:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:09.698 22:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:09.698 22:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74203' 00:07:09.698 22:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 74203 00:07:09.698 22:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 74203 00:07:10.265 22:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 74219 00:07:10.265 22:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 74219 ']' 00:07:10.265 22:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 74219 00:07:10.265 22:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:10.265 22:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # 
'[' Linux = Linux ']' 00:07:10.266 22:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74219 00:07:10.266 killing process with pid 74219 00:07:10.266 22:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:10.266 22:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:10.266 22:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74219' 00:07:10.266 22:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 74219 00:07:10.266 22:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 74219 00:07:10.524 00:07:10.524 real 0m3.907s 00:07:10.524 user 0m4.317s 00:07:10.524 sys 0m1.142s 00:07:10.524 22:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.524 22:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.524 ************************************ 00:07:10.524 END TEST non_locking_app_on_locked_coremask 00:07:10.524 ************************************ 00:07:10.524 22:08:42 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:10.524 22:08:42 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:10.524 22:08:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.524 22:08:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.524 ************************************ 00:07:10.524 START TEST locking_app_on_unlocked_coremask 00:07:10.524 ************************************ 00:07:10.524 22:08:42 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:07:10.524 22:08:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=74286 00:07:10.524 22:08:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 74286 /var/tmp/spdk.sock 00:07:10.524 22:08:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:10.524 22:08:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 74286 ']' 00:07:10.524 22:08:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.524 22:08:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.524 22:08:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.524 22:08:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.524 22:08:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.524 [2024-07-23 22:08:42.714673] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:07:10.524 [2024-07-23 22:08:42.714780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74286 ] 00:07:10.783 [2024-07-23 22:08:42.841208] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:10.783 [2024-07-23 22:08:42.860442] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:10.783 [2024-07-23 22:08:42.860497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.783 [2024-07-23 22:08:42.909592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.783 [2024-07-23 22:08:42.951027] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:11.717 22:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.717 22:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:11.717 22:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:11.717 22:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=74302 00:07:11.717 22:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 74302 /var/tmp/spdk2.sock 00:07:11.717 22:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 74302 ']' 00:07:11.717 22:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.717 22:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.717 22:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.717 22:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.717 22:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.717 [2024-07-23 22:08:43.687794] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:11.717 [2024-07-23 22:08:43.688151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74302 ] 00:07:11.717 [2024-07-23 22:08:43.809467] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:11.717 [2024-07-23 22:08:43.826620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.975 [2024-07-23 22:08:43.919766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.975 [2024-07-23 22:08:44.005132] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:12.542 22:08:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.542 22:08:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:12.542 22:08:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 74302 00:07:12.542 22:08:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 74302 00:07:12.542 22:08:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:13.488 22:08:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 74286 00:07:13.488 22:08:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 74286 ']' 00:07:13.488 22:08:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 74286 00:07:13.488 22:08:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:13.488 22:08:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:13.488 22:08:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74286 00:07:13.488 killing process with pid 74286 00:07:13.488 22:08:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:13.488 22:08:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = 
sudo ']' 00:07:13.488 22:08:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74286' 00:07:13.488 22:08:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 74286 00:07:13.488 22:08:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 74286 00:07:14.422 22:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 74302 00:07:14.422 22:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 74302 ']' 00:07:14.422 22:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 74302 00:07:14.422 22:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:14.422 22:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:14.422 22:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74302 00:07:14.422 22:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:14.422 22:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:14.422 killing process with pid 74302 00:07:14.422 22:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74302' 00:07:14.422 22:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 74302 00:07:14.422 22:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 74302 00:07:14.422 00:07:14.422 real 0m3.956s 00:07:14.422 user 0m4.403s 00:07:14.422 sys 0m1.170s 00:07:14.422 22:08:46 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.422 22:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.422 ************************************ 00:07:14.422 END TEST locking_app_on_unlocked_coremask 00:07:14.422 ************************************ 00:07:14.680 22:08:46 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:14.680 22:08:46 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:14.680 22:08:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.680 22:08:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.680 ************************************ 00:07:14.680 START TEST locking_app_on_locked_coremask 00:07:14.680 ************************************ 00:07:14.680 22:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:14.680 22:08:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=74371 00:07:14.680 22:08:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 74371 /var/tmp/spdk.sock 00:07:14.680 22:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 74371 ']' 00:07:14.680 22:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.680 22:08:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:14.680 22:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.680 22:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.680 22:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.680 22:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.680 [2024-07-23 22:08:46.732430] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:14.680 [2024-07-23 22:08:46.732534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74371 ] 00:07:14.680 [2024-07-23 22:08:46.858560] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:14.680 [2024-07-23 22:08:46.872698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.938 [2024-07-23 22:08:46.921747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.938 [2024-07-23 22:08:46.963101] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:15.504 22:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.504 22:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:15.504 22:08:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=74387 00:07:15.504 22:08:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:15.504 22:08:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 74387 /var/tmp/spdk2.sock 00:07:15.504 22:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:15.504 22:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 74387 /var/tmp/spdk2.sock 00:07:15.504 22:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:15.505 22:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.505 22:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:15.505 22:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.505 22:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 74387 /var/tmp/spdk2.sock 00:07:15.505 22:08:47 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 74387 ']' 00:07:15.505 22:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.505 22:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:15.505 22:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:15.505 22:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.505 22:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.505 [2024-07-23 22:08:47.663322] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:15.505 [2024-07-23 22:08:47.663437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74387 ] 00:07:15.763 [2024-07-23 22:08:47.792500] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:15.763 [2024-07-23 22:08:47.805581] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 74371 has claimed it. 00:07:15.763 [2024-07-23 22:08:47.805643] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:16.330 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (74387) - No such process 00:07:16.330 ERROR: process (pid: 74387) is no longer running 00:07:16.330 22:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.330 22:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:16.330 22:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:16.330 22:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:16.330 22:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:16.330 22:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:16.330 22:08:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 74371 00:07:16.330 22:08:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 74371 00:07:16.330 22:08:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:16.896 22:08:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 74371 00:07:16.896 22:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 74371 ']' 00:07:16.896 22:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 74371 00:07:16.896 22:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:16.896 22:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:16.896 22:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74371 00:07:16.896 
22:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:16.896 22:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:16.896 killing process with pid 74371 00:07:16.896 22:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74371' 00:07:16.896 22:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 74371 00:07:16.896 22:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 74371 00:07:17.155 00:07:17.155 real 0m2.515s 00:07:17.155 user 0m2.865s 00:07:17.155 sys 0m0.659s 00:07:17.155 22:08:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.155 22:08:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.155 ************************************ 00:07:17.155 END TEST locking_app_on_locked_coremask 00:07:17.155 ************************************ 00:07:17.155 22:08:49 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:17.155 22:08:49 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.155 22:08:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.155 22:08:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.155 ************************************ 00:07:17.155 START TEST locking_overlapped_coremask 00:07:17.155 ************************************ 00:07:17.155 22:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:17.155 22:08:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=74433 00:07:17.155 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.155 22:08:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 74433 /var/tmp/spdk.sock 00:07:17.155 22:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 74433 ']' 00:07:17.155 22:08:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:17.155 22:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.155 22:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.155 22:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.155 22:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.155 22:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.155 [2024-07-23 22:08:49.310385] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:17.155 [2024-07-23 22:08:49.310489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74433 ] 00:07:17.413 [2024-07-23 22:08:49.437526] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:17.413 [2024-07-23 22:08:49.450296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:17.413 [2024-07-23 22:08:49.501502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.413 [2024-07-23 22:08:49.501645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.413 [2024-07-23 22:08:49.501646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.413 [2024-07-23 22:08:49.544457] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:18.346 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:18.346 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:18.346 22:08:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=74445 00:07:18.346 22:08:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:18.346 22:08:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 74445 /var/tmp/spdk2.sock 00:07:18.346 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:18.346 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 74445 /var/tmp/spdk2.sock 00:07:18.346 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:18.346 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.346 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:18.346 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:07:18.346 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 74445 /var/tmp/spdk2.sock 00:07:18.346 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 74445 ']' 00:07:18.346 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.347 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.347 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:18.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:18.347 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.347 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.347 [2024-07-23 22:08:50.239361] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:18.347 [2024-07-23 22:08:50.239754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74445 ] 00:07:18.347 [2024-07-23 22:08:50.368939] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:18.347 [2024-07-23 22:08:50.385559] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 74433 has claimed it. 00:07:18.347 [2024-07-23 22:08:50.385645] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:18.945 ERROR: process (pid: 74445) is no longer running 00:07:18.946 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (74445) - No such process 00:07:18.946 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:18.946 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:18.946 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:18.946 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:18.946 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:18.946 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:18.946 22:08:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:18.946 22:08:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:18.946 22:08:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:18.946 22:08:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:18.946 22:08:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 74433 00:07:18.946 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 74433 ']' 00:07:18.946 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 74433 00:07:18.946 22:08:50 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:18.946 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:18.946 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74433 00:07:18.946 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:18.946 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:18.946 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74433' 00:07:18.946 killing process with pid 74433 00:07:18.946 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 74433 00:07:18.946 22:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 74433 00:07:19.204 00:07:19.205 real 0m2.036s 00:07:19.205 user 0m5.664s 00:07:19.205 sys 0m0.415s 00:07:19.205 22:08:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.205 22:08:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.205 ************************************ 00:07:19.205 END TEST locking_overlapped_coremask 00:07:19.205 ************************************ 00:07:19.205 22:08:51 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:19.205 22:08:51 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.205 22:08:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.205 22:08:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.205 ************************************ 00:07:19.205 START TEST 
locking_overlapped_coremask_via_rpc 00:07:19.205 ************************************ 00:07:19.205 22:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:19.205 22:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=74492 00:07:19.205 22:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 74492 /var/tmp/spdk.sock 00:07:19.205 22:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 74492 ']' 00:07:19.205 22:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.205 22:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:19.205 22:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.205 22:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.205 22:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.205 22:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.461 [2024-07-23 22:08:51.412831] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:07:19.461 [2024-07-23 22:08:51.412951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74492 ] 00:07:19.461 [2024-07-23 22:08:51.539699] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:19.461 [2024-07-23 22:08:51.550352] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:19.461 [2024-07-23 22:08:51.550406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:19.461 [2024-07-23 22:08:51.601528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.461 [2024-07-23 22:08:51.601700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.461 [2024-07-23 22:08:51.601701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.461 [2024-07-23 22:08:51.644203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:20.392 22:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.392 22:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:20.392 22:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:20.392 22:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=74505 00:07:20.392 22:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 74505 /var/tmp/spdk2.sock 00:07:20.392 22:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' 
-z 74505 ']' 00:07:20.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:20.392 22:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:20.392 22:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.392 22:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:20.392 22:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.392 22:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.392 [2024-07-23 22:08:52.365888] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:20.392 [2024-07-23 22:08:52.365991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74505 ] 00:07:20.392 [2024-07-23 22:08:52.496579] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:20.392 [2024-07-23 22:08:52.511732] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:20.392 [2024-07-23 22:08:52.511788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.649 [2024-07-23 22:08:52.614982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.649 [2024-07-23 22:08:52.618034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.649 [2024-07-23 22:08:52.618038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:20.649 [2024-07-23 22:08:52.698463] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:21.212 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.212 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:21.212 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:21.212 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.212 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.212 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.212 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.212 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:21.212 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.212 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:21.212 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc 
-- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.212 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:21.212 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.212 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.212 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.212 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.212 [2024-07-23 22:08:53.278993] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 74492 has claimed it. 00:07:21.212 request: 00:07:21.212 { 00:07:21.212 "method": "framework_enable_cpumask_locks", 00:07:21.212 "req_id": 1 00:07:21.212 } 00:07:21.212 Got JSON-RPC error response 00:07:21.212 response: 00:07:21.212 { 00:07:21.212 "code": -32603, 00:07:21.212 "message": "Failed to claim CPU core: 2" 00:07:21.212 } 00:07:21.212 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:21.212 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:21.212 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:21.212 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:21.212 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:21.212 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 74492 /var/tmp/spdk.sock 00:07:21.212 22:08:53 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 74492 ']' 00:07:21.212 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.212 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.212 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.212 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.212 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.469 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.469 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:21.469 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 74505 /var/tmp/spdk2.sock 00:07:21.469 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 74505 ']' 00:07:21.469 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:21.469 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.469 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:21.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:21.469 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.469 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.726 ************************************ 00:07:21.726 END TEST locking_overlapped_coremask_via_rpc 00:07:21.726 ************************************ 00:07:21.726 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.726 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:21.726 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:21.726 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:21.726 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:21.726 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:21.726 00:07:21.726 real 0m2.461s 00:07:21.726 user 0m1.146s 00:07:21.726 sys 0m0.228s 00:07:21.726 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.726 22:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.726 22:08:53 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:21.726 22:08:53 event.cpu_locks -- 
event/cpu_locks.sh@15 -- # [[ -z 74492 ]] 00:07:21.726 22:08:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 74492 00:07:21.726 22:08:53 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 74492 ']' 00:07:21.726 22:08:53 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 74492 00:07:21.726 22:08:53 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:21.726 22:08:53 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:21.726 22:08:53 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74492 00:07:21.726 killing process with pid 74492 00:07:21.726 22:08:53 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:21.726 22:08:53 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:21.726 22:08:53 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74492' 00:07:21.726 22:08:53 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 74492 00:07:21.726 22:08:53 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 74492 00:07:22.291 22:08:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 74505 ]] 00:07:22.291 22:08:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 74505 00:07:22.291 22:08:54 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 74505 ']' 00:07:22.291 22:08:54 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 74505 00:07:22.291 22:08:54 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:22.291 22:08:54 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:22.291 22:08:54 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74505 00:07:22.291 killing process with pid 74505 00:07:22.291 22:08:54 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:22.291 22:08:54 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:07:22.291 22:08:54 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74505' 00:07:22.291 22:08:54 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 74505 00:07:22.291 22:08:54 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 74505 00:07:22.549 22:08:54 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:22.549 Process with pid 74492 is not found 00:07:22.549 22:08:54 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:22.549 22:08:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 74492 ]] 00:07:22.549 22:08:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 74492 00:07:22.549 22:08:54 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 74492 ']' 00:07:22.549 22:08:54 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 74492 00:07:22.549 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (74492) - No such process 00:07:22.549 22:08:54 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 74492 is not found' 00:07:22.549 22:08:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 74505 ]] 00:07:22.549 22:08:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 74505 00:07:22.549 22:08:54 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 74505 ']' 00:07:22.549 22:08:54 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 74505 00:07:22.549 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (74505) - No such process 00:07:22.549 Process with pid 74505 is not found 00:07:22.549 22:08:54 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 74505 is not found' 00:07:22.549 22:08:54 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:22.549 00:07:22.549 real 0m19.801s 00:07:22.549 user 0m34.019s 00:07:22.549 sys 0m5.602s 00:07:22.549 22:08:54 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:07:22.549 22:08:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.549 ************************************ 00:07:22.549 END TEST cpu_locks 00:07:22.549 ************************************ 00:07:22.549 ************************************ 00:07:22.549 END TEST event 00:07:22.549 ************************************ 00:07:22.549 00:07:22.549 real 0m46.291s 00:07:22.549 user 1m27.840s 00:07:22.549 sys 0m9.531s 00:07:22.549 22:08:54 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.549 22:08:54 event -- common/autotest_common.sh@10 -- # set +x 00:07:22.549 22:08:54 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:22.549 22:08:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:22.549 22:08:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.549 22:08:54 -- common/autotest_common.sh@10 -- # set +x 00:07:22.549 ************************************ 00:07:22.549 START TEST thread 00:07:22.549 ************************************ 00:07:22.549 22:08:54 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:22.549 * Looking for test storage... 
00:07:22.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:22.807 22:08:54 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:22.807 22:08:54 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:22.807 22:08:54 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.807 22:08:54 thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.807 ************************************ 00:07:22.807 START TEST thread_poller_perf 00:07:22.807 ************************************ 00:07:22.807 22:08:54 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:22.807 [2024-07-23 22:08:54.777184] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:22.807 [2024-07-23 22:08:54.777289] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74631 ] 00:07:22.807 [2024-07-23 22:08:54.902556] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:22.807 [2024-07-23 22:08:54.920685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.807 [2024-07-23 22:08:54.970443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.807 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:24.179 ====================================== 00:07:24.179 busy:2109550526 (cyc) 00:07:24.179 total_run_count: 419000 00:07:24.179 tsc_hz: 2100000000 (cyc) 00:07:24.179 ====================================== 00:07:24.179 poller_cost: 5034 (cyc), 2397 (nsec) 00:07:24.179 00:07:24.179 real 0m1.287s 00:07:24.179 user 0m1.121s 00:07:24.179 sys 0m0.058s 00:07:24.179 22:08:56 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.179 22:08:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:24.179 ************************************ 00:07:24.179 END TEST thread_poller_perf 00:07:24.179 ************************************ 00:07:24.179 22:08:56 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:24.179 22:08:56 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:24.179 22:08:56 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.179 22:08:56 thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.179 ************************************ 00:07:24.179 START TEST thread_poller_perf 00:07:24.179 ************************************ 00:07:24.179 22:08:56 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:24.179 [2024-07-23 22:08:56.131567] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:24.179 [2024-07-23 22:08:56.131670] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74661 ] 00:07:24.179 [2024-07-23 22:08:56.257279] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:07:24.179 [2024-07-23 22:08:56.273887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.179 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:24.179 [2024-07-23 22:08:56.323670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.557 ====================================== 00:07:25.557 busy:2101716874 (cyc) 00:07:25.557 total_run_count: 5509000 00:07:25.557 tsc_hz: 2100000000 (cyc) 00:07:25.557 ====================================== 00:07:25.557 poller_cost: 381 (cyc), 181 (nsec) 00:07:25.557 00:07:25.557 real 0m1.280s 00:07:25.557 user 0m1.113s 00:07:25.557 sys 0m0.061s 00:07:25.557 22:08:57 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.557 ************************************ 00:07:25.557 END TEST thread_poller_perf 00:07:25.557 ************************************ 00:07:25.557 22:08:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:25.557 22:08:57 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:25.557 00:07:25.557 real 0m2.796s 00:07:25.557 user 0m2.308s 00:07:25.557 sys 0m0.272s 00:07:25.557 ************************************ 00:07:25.557 END TEST thread 00:07:25.557 ************************************ 00:07:25.557 22:08:57 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.557 22:08:57 thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.557 22:08:57 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:25.557 22:08:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:25.558 22:08:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.558 22:08:57 -- common/autotest_common.sh@10 -- # set +x 00:07:25.558 ************************************ 00:07:25.558 START TEST accel 00:07:25.558 ************************************ 00:07:25.558 22:08:57 accel -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:25.558 * Looking for test storage... 00:07:25.558 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:25.558 22:08:57 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:25.558 22:08:57 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:25.558 22:08:57 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:25.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.558 22:08:57 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=74736 00:07:25.558 22:08:57 accel -- accel/accel.sh@63 -- # waitforlisten 74736 00:07:25.558 22:08:57 accel -- common/autotest_common.sh@829 -- # '[' -z 74736 ']' 00:07:25.558 22:08:57 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.558 22:08:57 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:25.558 22:08:57 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.558 22:08:57 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:25.558 22:08:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.558 22:08:57 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:25.558 22:08:57 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:25.558 22:08:57 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.558 22:08:57 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.558 22:08:57 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.558 22:08:57 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.558 22:08:57 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.558 22:08:57 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:25.558 22:08:57 accel -- accel/accel.sh@41 -- # jq -r . 
00:07:25.558 [2024-07-23 22:08:57.668390] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:25.558 [2024-07-23 22:08:57.668483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74736 ] 00:07:25.817 [2024-07-23 22:08:57.794868] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:25.817 [2024-07-23 22:08:57.813537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.817 [2024-07-23 22:08:57.864115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.817 [2024-07-23 22:08:57.907293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:26.755 22:08:58 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:26.755 22:08:58 accel -- common/autotest_common.sh@862 -- # return 0 00:07:26.755 22:08:58 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:26.755 22:08:58 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:26.755 22:08:58 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:26.755 22:08:58 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:26.755 22:08:58 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:26.755 22:08:58 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:26.755 22:08:58 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.756 22:08:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.756 22:08:58 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:26.756 22:08:58 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.756 22:08:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.756 22:08:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.756 22:08:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.756 22:08:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.756 22:08:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.756 22:08:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.756 22:08:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.756 22:08:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.756 22:08:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.756 22:08:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.756 22:08:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.756 22:08:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.756 22:08:58 accel -- accel/accel.sh@71 
-- # for opc_opt in "${exp_opcs[@]}" 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.756 22:08:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.756 22:08:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.756 22:08:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.756 22:08:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.756 22:08:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.756 22:08:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.756 22:08:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.756 22:08:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.756 22:08:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.756 22:08:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.756 22:08:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.756 22:08:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # read -r opc module 
00:07:26.756 22:08:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.756 22:08:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.756 22:08:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.756 22:08:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.756 22:08:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.756 22:08:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.756 22:08:58 accel -- accel/accel.sh@75 -- # killprocess 74736 00:07:26.756 22:08:58 accel -- common/autotest_common.sh@948 -- # '[' -z 74736 ']' 00:07:26.756 22:08:58 accel -- common/autotest_common.sh@952 -- # kill -0 74736 00:07:26.756 22:08:58 accel -- common/autotest_common.sh@953 -- # uname 00:07:26.756 22:08:58 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:26.756 22:08:58 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74736 00:07:26.756 killing process with pid 74736 00:07:26.756 22:08:58 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:26.756 22:08:58 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:26.756 22:08:58 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74736' 00:07:26.756 22:08:58 accel -- common/autotest_common.sh@967 -- # kill 74736 00:07:26.756 22:08:58 accel -- common/autotest_common.sh@972 -- # wait 74736 00:07:27.015 22:08:59 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:27.015 22:08:59 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:27.015 22:08:59 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:27.015 22:08:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.015 
22:08:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.015 22:08:59 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:27.015 22:08:59 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:27.015 22:08:59 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:27.015 22:08:59 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.015 22:08:59 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.015 22:08:59 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.015 22:08:59 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.015 22:08:59 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.015 22:08:59 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:27.016 22:08:59 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:07:27.016 22:08:59 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.016 22:08:59 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:27.016 22:08:59 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:27.016 22:08:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:27.016 22:08:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.016 22:08:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.016 ************************************ 00:07:27.016 START TEST accel_missing_filename 00:07:27.016 ************************************ 00:07:27.016 22:08:59 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:27.016 22:08:59 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:27.016 22:08:59 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:27.016 22:08:59 
accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:27.016 22:08:59 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:27.016 22:08:59 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:27.016 22:08:59 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:27.016 22:08:59 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:27.016 22:08:59 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:27.016 22:08:59 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:27.016 22:08:59 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.016 22:08:59 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.016 22:08:59 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.016 22:08:59 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.016 22:08:59 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.016 22:08:59 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:27.016 22:08:59 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:27.016 [2024-07-23 22:08:59.161832] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:27.016 [2024-07-23 22:08:59.162183] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74787 ] 00:07:27.275 [2024-07-23 22:08:59.279802] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:07:27.275 [2024-07-23 22:08:59.297143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.275 [2024-07-23 22:08:59.347808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.275 [2024-07-23 22:08:59.392465] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:27.275 [2024-07-23 22:08:59.455220] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:27.533 A filename is required. 00:07:27.533 22:08:59 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:27.533 22:08:59 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:27.533 22:08:59 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:27.533 ************************************ 00:07:27.533 END TEST accel_missing_filename 00:07:27.533 ************************************ 00:07:27.533 22:08:59 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:27.533 22:08:59 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:27.533 22:08:59 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:27.533 00:07:27.533 real 0m0.390s 00:07:27.533 user 0m0.217s 00:07:27.533 sys 0m0.110s 00:07:27.533 22:08:59 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.533 22:08:59 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:27.533 22:08:59 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:27.533 22:08:59 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:27.533 22:08:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.533 22:08:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.533 
************************************ 00:07:27.533 START TEST accel_compress_verify 00:07:27.533 ************************************ 00:07:27.533 22:08:59 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:27.533 22:08:59 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:27.534 22:08:59 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:27.534 22:08:59 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:27.534 22:08:59 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:27.534 22:08:59 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:27.534 22:08:59 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:27.534 22:08:59 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:27.534 22:08:59 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:27.534 22:08:59 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:27.534 22:08:59 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.534 22:08:59 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.534 22:08:59 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.534 22:08:59 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.534 22:08:59 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.534 22:08:59 
accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:27.534 22:08:59 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:27.534 [2024-07-23 22:08:59.615520] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:27.534 [2024-07-23 22:08:59.615622] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74812 ] 00:07:27.792 [2024-07-23 22:08:59.741213] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:27.792 [2024-07-23 22:08:59.761515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.792 [2024-07-23 22:08:59.812524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.792 [2024-07-23 22:08:59.856000] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:27.792 [2024-07-23 22:08:59.917414] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:27.792 00:07:27.792 Compression does not support the verify option, aborting. 
00:07:28.052 22:08:59 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:28.052 22:08:59 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:28.052 22:08:59 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:28.052 22:08:59 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:28.052 22:08:59 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:28.052 22:08:59 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:28.052 00:07:28.052 real 0m0.404s 00:07:28.052 user 0m0.229s 00:07:28.052 sys 0m0.112s 00:07:28.052 22:08:59 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.052 22:08:59 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:28.052 ************************************ 00:07:28.052 END TEST accel_compress_verify 00:07:28.052 ************************************ 00:07:28.052 22:09:00 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:28.052 22:09:00 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:28.052 22:09:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.052 22:09:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.052 ************************************ 00:07:28.052 START TEST accel_wrong_workload 00:07:28.052 ************************************ 00:07:28.052 22:09:00 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:28.052 22:09:00 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:28.052 22:09:00 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:28.052 22:09:00 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:28.052 22:09:00 
accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.052 22:09:00 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:28.052 22:09:00 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.052 22:09:00 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:28.052 22:09:00 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:28.052 22:09:00 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:28.052 22:09:00 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.052 22:09:00 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.052 22:09:00 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.052 22:09:00 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.052 22:09:00 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.052 22:09:00 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:28.052 22:09:00 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:28.052 Unsupported workload type: foobar 00:07:28.052 [2024-07-23 22:09:00.076612] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:28.052 accel_perf options: 00:07:28.052 [-h help message] 00:07:28.052 [-q queue depth per core] 00:07:28.052 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:28.052 [-T number of threads per core 00:07:28.052 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:07:28.052 [-t time in seconds] 00:07:28.052 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:28.052 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:28.052 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:28.052 [-l for compress/decompress workloads, name of uncompressed input file 00:07:28.052 [-S for crc32c workload, use this seed value (default 0) 00:07:28.052 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:28.052 [-f for fill workload, use this BYTE value (default 255) 00:07:28.052 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:28.052 [-y verify result if this switch is on] 00:07:28.052 [-a tasks to allocate per core (default: same value as -q)] 00:07:28.052 Can be used to spread operations across a wider range of memory. 
00:07:28.052 22:09:00 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:28.052 ************************************ 00:07:28.052 END TEST accel_wrong_workload 00:07:28.052 ************************************ 00:07:28.052 22:09:00 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:28.052 22:09:00 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:28.052 22:09:00 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:28.052 00:07:28.052 real 0m0.030s 00:07:28.052 user 0m0.017s 00:07:28.052 sys 0m0.013s 00:07:28.052 22:09:00 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.052 22:09:00 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:28.052 22:09:00 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:28.052 22:09:00 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:28.052 22:09:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.052 22:09:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.052 ************************************ 00:07:28.052 START TEST accel_negative_buffers 00:07:28.052 ************************************ 00:07:28.052 22:09:00 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:28.052 22:09:00 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:28.052 22:09:00 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:28.052 22:09:00 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:28.052 22:09:00 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.052 22:09:00 accel.accel_negative_buffers -- 
common/autotest_common.sh@640 -- # type -t accel_perf 00:07:28.052 22:09:00 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.052 22:09:00 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:28.052 22:09:00 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:28.052 22:09:00 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:28.052 22:09:00 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.052 22:09:00 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.052 22:09:00 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.052 22:09:00 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.052 22:09:00 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.052 22:09:00 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:28.052 22:09:00 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:28.052 -x option must be non-negative. 00:07:28.052 [2024-07-23 22:09:00.163804] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:28.052 accel_perf options: 00:07:28.052 [-h help message] 00:07:28.052 [-q queue depth per core] 00:07:28.052 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:28.052 [-T number of threads per core 00:07:28.052 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:07:28.052 [-t time in seconds] 00:07:28.052 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:28.052 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:28.052 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:28.053 [-l for compress/decompress workloads, name of uncompressed input file 00:07:28.053 [-S for crc32c workload, use this seed value (default 0) 00:07:28.053 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:28.053 [-f for fill workload, use this BYTE value (default 255) 00:07:28.053 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:28.053 [-y verify result if this switch is on] 00:07:28.053 [-a tasks to allocate per core (default: same value as -q)] 00:07:28.053 Can be used to spread operations across a wider range of memory. 
00:07:28.053 22:09:00 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:28.053 22:09:00 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:28.053 22:09:00 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:28.053 22:09:00 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:28.053 ************************************ 00:07:28.053 END TEST accel_negative_buffers 00:07:28.053 ************************************ 00:07:28.053 00:07:28.053 real 0m0.037s 00:07:28.053 user 0m0.015s 00:07:28.053 sys 0m0.019s 00:07:28.053 22:09:00 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.053 22:09:00 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:28.053 22:09:00 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:28.053 22:09:00 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:28.053 22:09:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.053 22:09:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.053 ************************************ 00:07:28.053 START TEST accel_crc32c 00:07:28.053 ************************************ 00:07:28.053 22:09:00 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:28.053 22:09:00 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:28.053 22:09:00 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:28.053 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.053 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.053 22:09:00 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:28.053 22:09:00 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c 
/dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:28.053 22:09:00 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:28.053 22:09:00 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.053 22:09:00 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.053 22:09:00 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.053 22:09:00 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.053 22:09:00 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.053 22:09:00 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:28.053 22:09:00 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:28.312 [2024-07-23 22:09:00.258204] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:28.312 [2024-07-23 22:09:00.258316] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74870 ] 00:07:28.312 [2024-07-23 22:09:00.383369] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:28.312 [2024-07-23 22:09:00.400804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.312 [2024-07-23 22:09:00.450374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@23 -- # 
accel_opc=crc32c 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.312 22:09:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:28.598 22:09:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.598 22:09:00 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:28.598 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.598 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.598 22:09:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:28.598 22:09:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.598 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.598 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.598 22:09:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:28.598 22:09:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:28.598 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.599 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.599 22:09:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:28.599 22:09:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.599 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.599 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.599 22:09:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.599 22:09:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.599 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.599 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.599 22:09:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:28.599 22:09:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.599 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.599 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.599 22:09:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.599 22:09:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.599 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.599 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.599 22:09:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.599 22:09:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.599 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.599 22:09:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.558 22:09:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.558 22:09:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.558 22:09:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.558 22:09:01 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:07:29.558 22:09:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.558 22:09:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.558 22:09:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.558 22:09:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.558 22:09:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.558 22:09:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.558 22:09:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.558 22:09:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.558 22:09:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.558 22:09:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.558 22:09:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.558 22:09:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.558 22:09:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.558 22:09:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.558 22:09:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.558 22:09:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.558 22:09:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.558 22:09:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.558 22:09:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.558 22:09:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.558 22:09:01 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.558 22:09:01 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:29.558 22:09:01 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.558 00:07:29.558 real 0m1.396s 00:07:29.558 user 0m0.015s 00:07:29.558 sys 0m0.005s 00:07:29.558 22:09:01 accel.accel_crc32c -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.558 22:09:01 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:29.558 ************************************ 00:07:29.558 END TEST accel_crc32c 00:07:29.558 ************************************ 00:07:29.558 22:09:01 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:29.558 22:09:01 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:29.558 22:09:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.558 22:09:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.558 ************************************ 00:07:29.558 START TEST accel_crc32c_C2 00:07:29.558 ************************************ 00:07:29.558 22:09:01 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:29.558 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:29.558 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:29.558 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.558 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.558 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:29.558 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:29.558 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.558 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.558 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.558 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.558 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.558 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 
00:07:29.558 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:29.558 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:29.558 [2024-07-23 22:09:01.713694] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:29.558 [2024-07-23 22:09:01.714567] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74905 ] 00:07:29.817 [2024-07-23 22:09:01.840243] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:29.817 [2024-07-23 22:09:01.857888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.817 [2024-07-23 22:09:01.908172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.817 22:09:01 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
read -r var val 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.817 
22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.817 22:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r 
var val 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.193 00:07:31.193 real 0m1.406s 00:07:31.193 user 0m1.206s 00:07:31.193 sys 0m0.106s 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.193 22:09:03 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:31.193 ************************************ 00:07:31.193 END TEST accel_crc32c_C2 00:07:31.193 ************************************ 00:07:31.193 22:09:03 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:31.193 22:09:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:31.193 22:09:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.193 22:09:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.193 ************************************ 00:07:31.193 START TEST accel_copy 00:07:31.193 ************************************ 00:07:31.193 22:09:03 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:31.193 22:09:03 accel.accel_copy -- accel/accel.sh@16 -- # local 
accel_opc 00:07:31.193 22:09:03 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:31.193 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.193 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.193 22:09:03 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:31.193 22:09:03 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:31.193 22:09:03 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:31.193 22:09:03 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.193 22:09:03 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.193 22:09:03 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.193 22:09:03 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.193 22:09:03 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.193 22:09:03 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:31.193 22:09:03 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:31.193 [2024-07-23 22:09:03.172266] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:31.193 [2024-07-23 22:09:03.172354] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74939 ] 00:07:31.193 [2024-07-23 22:09:03.288777] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:31.193 [2024-07-23 22:09:03.303832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.193 [2024-07-23 22:09:03.353595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:31.452 22:09:03 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.452 22:09:03 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.452 22:09:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.388 22:09:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.388 22:09:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.388 22:09:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.388 22:09:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.388 22:09:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.388 22:09:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.388 22:09:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.388 22:09:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.388 22:09:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.388 22:09:04 accel.accel_copy -- accel/accel.sh@21 -- # 
case "$var" in 00:07:32.388 22:09:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.388 22:09:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.388 22:09:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.388 22:09:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.388 22:09:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.388 22:09:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.388 22:09:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.388 22:09:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.388 22:09:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.388 22:09:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.388 22:09:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.388 22:09:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.388 22:09:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.388 22:09:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.388 22:09:04 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.388 22:09:04 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:32.388 22:09:04 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.388 00:07:32.388 real 0m1.381s 00:07:32.388 user 0m1.196s 00:07:32.388 sys 0m0.098s 00:07:32.388 22:09:04 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.388 22:09:04 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:32.388 ************************************ 00:07:32.388 END TEST accel_copy 00:07:32.388 ************************************ 00:07:32.647 22:09:04 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:32.647 22:09:04 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:32.647 22:09:04 accel -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.647 22:09:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.647 ************************************ 00:07:32.647 START TEST accel_fill 00:07:32.647 ************************************ 00:07:32.647 22:09:04 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:32.647 22:09:04 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:32.647 22:09:04 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:32.647 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.647 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.647 22:09:04 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:32.647 22:09:04 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:32.647 22:09:04 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:32.647 22:09:04 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.647 22:09:04 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.647 22:09:04 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.647 22:09:04 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.647 22:09:04 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.647 22:09:04 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:32.647 22:09:04 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:32.647 [2024-07-23 22:09:04.629569] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:07:32.647 [2024-07-23 22:09:04.629666] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74968 ] 00:07:32.647 [2024-07-23 22:09:04.754614] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:32.647 [2024-07-23 22:09:04.773646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.647 [2024-07-23 22:09:04.823494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 
00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@20 -- 
# val=64 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:07:32.906 22:09:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.842 22:09:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:33.842 22:09:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.842 22:09:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.842 22:09:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.842 22:09:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:33.842 22:09:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.842 22:09:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.842 22:09:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.842 22:09:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:33.842 22:09:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.842 22:09:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.842 22:09:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.842 22:09:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:33.842 22:09:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.842 22:09:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.842 22:09:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.842 22:09:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:33.842 22:09:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.842 22:09:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.842 22:09:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.842 22:09:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:33.842 22:09:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.842 22:09:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.842 22:09:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.842 22:09:06 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.842 22:09:06 
accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:33.842 22:09:06 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.842 00:07:33.842 real 0m1.404s 00:07:33.842 user 0m1.206s 00:07:33.842 sys 0m0.107s 00:07:33.842 22:09:06 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.842 22:09:06 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:33.842 ************************************ 00:07:33.842 END TEST accel_fill 00:07:33.842 ************************************ 00:07:34.101 22:09:06 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:34.101 22:09:06 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:34.101 22:09:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.101 22:09:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.101 ************************************ 00:07:34.101 START TEST accel_copy_crc32c 00:07:34.101 ************************************ 00:07:34.101 22:09:06 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:34.101 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:34.101 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:34.101 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.101 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.101 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:34.101 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:34.101 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:34.101 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.101 22:09:06 
accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.101 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.101 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.101 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.101 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:34.101 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:34.101 [2024-07-23 22:09:06.091126] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:34.101 [2024-07-23 22:09:06.092097] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75007 ] 00:07:34.101 [2024-07-23 22:09:06.217420] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:34.101 [2024-07-23 22:09:06.236829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.101 [2024-07-23 22:09:06.285770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.360 
22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.360 
22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.360 22:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.296 22:09:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.296 22:09:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.296 22:09:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.296 22:09:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.296 22:09:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.296 22:09:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.296 22:09:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.296 22:09:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.296 22:09:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.296 22:09:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.296 22:09:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.296 22:09:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.296 22:09:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.296 22:09:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.296 22:09:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.296 22:09:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.296 22:09:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.296 22:09:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.296 22:09:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.296 22:09:07 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.296 22:09:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.296 22:09:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.296 22:09:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.296 22:09:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.296 22:09:07 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:35.296 22:09:07 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:35.296 22:09:07 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.296 00:07:35.296 real 0m1.396s 00:07:35.296 user 0m1.197s 00:07:35.296 sys 0m0.107s 00:07:35.296 22:09:07 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.296 22:09:07 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:35.296 ************************************ 00:07:35.296 END TEST accel_copy_crc32c 00:07:35.296 ************************************ 00:07:35.554 22:09:07 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:35.554 22:09:07 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:35.554 22:09:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.554 22:09:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.554 ************************************ 00:07:35.554 START TEST accel_copy_crc32c_C2 00:07:35.554 ************************************ 00:07:35.554 22:09:07 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:35.554 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:35.554 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:35.554 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- 
# IFS=: 00:07:35.554 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.554 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:35.554 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:35.554 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.554 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.554 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.554 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.554 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.554 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.554 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:35.554 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:35.554 [2024-07-23 22:09:07.552313] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:35.554 [2024-07-23 22:09:07.552420] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75037 ] 00:07:35.554 [2024-07-23 22:09:07.678411] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:35.554 [2024-07-23 22:09:07.697084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.554 [2024-07-23 22:09:07.746420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # 
val=copy_crc32c 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@22 -- # accel_module=software 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@20 -- # val= 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.814 22:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.754 22:09:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:36.754 22:09:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.754 22:09:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.754 22:09:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.754 22:09:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:36.754 22:09:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.754 22:09:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.754 22:09:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.754 22:09:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:36.754 22:09:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.754 22:09:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.754 22:09:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.754 22:09:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:36.754 22:09:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.754 22:09:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.754 22:09:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:07:36.754 22:09:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:36.754 22:09:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.755 22:09:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.755 22:09:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.755 22:09:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:36.755 22:09:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.755 22:09:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.755 22:09:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.755 22:09:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.755 22:09:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:36.755 22:09:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.755 00:07:36.755 real 0m1.399s 00:07:36.755 user 0m1.198s 00:07:36.755 sys 0m0.109s 00:07:36.755 22:09:08 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.755 22:09:08 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:36.755 ************************************ 00:07:36.755 END TEST accel_copy_crc32c_C2 00:07:36.755 ************************************ 00:07:37.014 22:09:08 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:37.014 22:09:08 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:37.014 22:09:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.014 22:09:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.014 ************************************ 00:07:37.014 START TEST accel_dualcast 00:07:37.014 ************************************ 00:07:37.014 22:09:08 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 
-w dualcast -y 00:07:37.014 22:09:08 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:37.014 22:09:08 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:37.014 22:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.014 22:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.014 22:09:08 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:37.014 22:09:08 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:37.014 22:09:08 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:37.014 22:09:08 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.014 22:09:08 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.014 22:09:08 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.014 22:09:08 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.014 22:09:08 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.014 22:09:08 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:37.014 22:09:08 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:37.014 [2024-07-23 22:09:09.009883] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:37.014 [2024-07-23 22:09:09.009988] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75072 ] 00:07:37.014 [2024-07-23 22:09:09.135229] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:37.014 [2024-07-23 22:09:09.151737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.014 [2024-07-23 22:09:09.200564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.273 22:09:09 
accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast
00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software
00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software
00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1
00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds'
00:07:37.273 22:09:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes
00:07:38.213 22:09:10 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:38.213 22:09:10 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:07:38.213 22:09:10 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:38.213
00:07:38.213 real	0m1.402s
00:07:38.213 user	0m1.191s
00:07:38.213 sys	0m0.113s
00:07:38.213 22:09:10 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:38.213 22:09:10 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:07:38.213 ************************************
00:07:38.213 END TEST accel_dualcast
00:07:38.213 ************************************
00:07:38.481 22:09:10 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:07:38.481 22:09:10 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:07:38.481 22:09:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:38.481 22:09:10 accel -- common/autotest_common.sh@10 -- # set +x
00:07:38.481 ************************************
00:07:38.481 START TEST accel_compare
00:07:38.481 ************************************
00:07:38.481 22:09:10 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y
00:07:38.481 22:09:10 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:07:38.481 22:09:10 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:07:38.481 22:09:10 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
00:07:38.481 22:09:10 accel.accel_compare -- accel/accel.sh@41 -- # jq -r .
00:07:38.481 [2024-07-23 22:09:10.471396] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization...
00:07:38.481 [2024-07-23 22:09:10.471503] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75106 ]
00:07:38.481 [2024-07-23 22:09:10.597113] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:07:38.481 [2024-07-23 22:09:10.613971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:38.481 [2024-07-23 22:09:10.662818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:38.740 22:09:10 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1
00:07:38.740 22:09:10 accel.accel_compare -- accel/accel.sh@20 -- # val=compare
00:07:38.740 22:09:10 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare
00:07:38.740 22:09:10 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:38.740 22:09:10 accel.accel_compare -- accel/accel.sh@20 -- # val=software
00:07:38.740 22:09:10 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software
00:07:38.741 22:09:10 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:07:38.741 22:09:10 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:07:38.741 22:09:10 accel.accel_compare -- accel/accel.sh@20 -- # val=1
00:07:38.741 22:09:10 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds'
00:07:38.741 22:09:10 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes
00:07:39.678 22:09:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:39.678 22:09:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:07:39.678 22:09:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:39.678
00:07:39.678 real	0m1.394s
00:07:39.678 user	0m0.008s
00:07:39.678 sys	0m0.008s
00:07:39.678 22:09:11 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:39.678 22:09:11 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x
00:07:39.678 ************************************
00:07:39.678 END TEST accel_compare
00:07:39.678 ************************************
00:07:39.936 22:09:11 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:07:39.936 ************************************
00:07:39.936 START TEST accel_xor
00:07:39.936 ************************************
00:07:39.936 22:09:11 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y
00:07:39.936 22:09:11 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:07:39.936 22:09:11 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:07:39.936 22:09:11 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:07:39.936 22:09:11 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
00:07:39.936 [2024-07-23 22:09:11.919451] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization...
00:07:39.936 [2024-07-23 22:09:11.919757] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75141 ]
00:07:39.936 [2024-07-23 22:09:12.037092] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:07:39.936 [2024-07-23 22:09:12.053346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:39.936 [2024-07-23 22:09:12.102362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:40.194 22:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:07:40.194 22:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:07:40.194 22:09:12 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:07:40.194 22:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val=2
00:07:40.194 22:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:40.194 22:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:07:40.194 22:09:12 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:07:40.195 22:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:07:40.195 22:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:07:40.195 22:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:07:40.195 22:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:07:40.195 22:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:07:41.127 22:09:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:41.127 22:09:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:07:41.127 22:09:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:41.127
00:07:41.127 real	0m1.383s
00:07:41.127 user	0m1.194s
00:07:41.127 sys	0m0.092s
00:07:41.127 22:09:13 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:41.127 22:09:13 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:07:41.127 ************************************
00:07:41.127 END TEST accel_xor
00:07:41.127 ************************************
00:07:41.384 22:09:13 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:07:41.384 ************************************
00:07:41.384 START TEST accel_xor
00:07:41.384 ************************************
00:07:41.384 22:09:13 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3
00:07:41.384 22:09:13 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:07:41.384 22:09:13 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:07:41.384 22:09:13 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:07:41.384 22:09:13 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
00:07:41.384 [2024-07-23 22:09:13.383006] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization...
00:07:41.384 [2024-07-23 22:09:13.383110] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75170 ] 00:07:41.384 [2024-07-23 22:09:13.508487] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:41.384 [2024-07-23 22:09:13.525796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.384 [2024-07-23 22:09:13.575188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.642 22:09:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.642 22:09:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.642 22:09:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.642 22:09:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.642 22:09:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.642 22:09:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.642 22:09:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.642 22:09:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.642 22:09:13 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:41.642 22:09:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.642 22:09:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.642 22:09:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.642 22:09:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.642 22:09:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.642 22:09:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.642 22:09:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.642 22:09:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.642 22:09:13 
accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:07:41.642 22:09:13 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:07:41.642 22:09:13 accel.accel_xor -- accel/accel.sh@20 -- # val=xor val=3 val='4096 bytes' val=software val=32 val=32 val=1 val='1 seconds' val=Yes
00:07:42.574 22:09:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:42.575 22:09:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:07:42.575 22:09:14 accel.accel_xor --
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:42.575
00:07:42.575 real	0m1.404s
00:07:42.575 user	0m1.206s
00:07:42.575 sys	0m0.101s
00:07:42.575 22:09:14 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:42.575 22:09:14 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:07:42.575 ************************************
00:07:42.575 END TEST accel_xor
00:07:42.575 ************************************
00:07:42.832 22:09:14 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:07:42.832 22:09:14 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:07:42.832 22:09:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:42.832 22:09:14 accel -- common/autotest_common.sh@10 -- # set +x
00:07:42.832 ************************************
00:07:42.832 START TEST accel_dif_verify
00:07:42.832 ************************************
00:07:42.832 22:09:14 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify
00:07:42.832 22:09:14 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:07:42.832 22:09:14 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config
00:07:42.832 [2024-07-23 22:09:14.853792] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization...
00:07:42.832 [2024-07-23 22:09:14.854152] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75204 ]
00:07:42.832 [2024-07-23 22:09:14.979383] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:07:42.832 [2024-07-23 22:09:14.999194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:43.090 [2024-07-23 22:09:15.056545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:43.090 22:09:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1
00:07:43.090 22:09:15 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify
00:07:43.090 22:09:15 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software
00:07:43.090 22:09:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify val='4096 bytes' val='4096 bytes' val='512 bytes' val='8 bytes' val=software val=32 val=32 val=1 val='1 seconds' val=No
00:07:44.463 22:09:16 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:44.463 22:09:16 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:07:44.463 ************************************
00:07:44.463 END TEST accel_dif_verify
00:07:44.464 ************************************
00:07:44.464 22:09:16 accel.accel_dif_verify -- accel/accel.sh@27 -- #
[[ software == \s\o\f\t\w\a\r\e ]]
00:07:44.464
00:07:44.464 real	0m1.421s
00:07:44.464 user	0m1.216s
00:07:44.464 sys	0m0.110s
00:07:44.464 22:09:16 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:44.464 22:09:16 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:07:44.464 22:09:16 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:07:44.464 22:09:16 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:07:44.464 22:09:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:44.464 22:09:16 accel -- common/autotest_common.sh@10 -- # set +x
00:07:44.464 ************************************
00:07:44.464 START TEST accel_dif_generate
00:07:44.464 ************************************
00:07:44.464 22:09:16 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate
00:07:44.464 22:09:16 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:07:44.464 22:09:16 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config
00:07:44.464 [2024-07-23 22:09:16.330990] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization...
00:07:44.464 [2024-07-23 22:09:16.331093] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75239 ]
00:07:44.464 [2024-07-23 22:09:16.456040] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:07:44.464 [2024-07-23 22:09:16.471857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:44.464 [2024-07-23 22:09:16.520990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:44.464 22:09:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1
00:07:44.464 22:09:16 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate
00:07:44.464 22:09:16 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software
00:07:44.464 22:09:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate val='4096 bytes' val='4096 bytes' val='512 bytes' val='8 bytes' val=software val=32 val=32 val=1 val='1 seconds' val=No
00:07:45.837 22:09:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:45.837 22:09:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:07:45.837 22:09:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[
software == \s\o\f\t\w\a\r\e ]]
00:07:45.837
00:07:45.837 real	0m1.394s
00:07:45.837 user	0m0.012s
00:07:45.837 sys	0m0.005s
00:07:45.837 22:09:17 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:45.837 ************************************
00:07:45.837 END TEST accel_dif_generate
00:07:45.837 ************************************
00:07:45.837 22:09:17 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:07:45.837 22:09:17 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:07:45.837 22:09:17 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:07:45.837 22:09:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:45.837 22:09:17 accel -- common/autotest_common.sh@10 -- # set +x
00:07:45.837 ************************************
00:07:45.837 START TEST accel_dif_generate_copy
00:07:45.837 ************************************
00:07:45.837 22:09:17 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy
00:07:45.837 22:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:07:45.837 22:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config
00:07:45.837 [2024-07-23 22:09:17.784770] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization...
00:07:45.837 [2024-07-23 22:09:17.784904] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75273 ]
00:07:45.837 [2024-07-23 22:09:17.910101] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:07:45.837 [2024-07-23 22:09:17.926802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:45.837 [2024-07-23 22:09:17.976444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:45.837 22:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1
00:07:45.837 22:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy
00:07:46.095 22:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software
00:07:46.095 22:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy val='4096 bytes' val='4096 bytes' val=software val=32 val=32 val=1 val='1 seconds' val=No
00:07:47.029 22:09:19 accel.accel_dif_generate_copy --
accel/accel.sh@20 -- # val= 00:07:47.029 22:09:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.029 22:09:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.029 22:09:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.029 22:09:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:47.029 22:09:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.029 22:09:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.029 ************************************ 00:07:47.029 END TEST accel_dif_generate_copy 00:07:47.029 ************************************ 00:07:47.029 22:09:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.029 22:09:19 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:47.029 22:09:19 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:47.029 22:09:19 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.029 00:07:47.029 real 0m1.399s 00:07:47.029 user 0m1.212s 00:07:47.029 sys 0m0.096s 00:07:47.029 22:09:19 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.029 22:09:19 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:47.029 22:09:19 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:47.029 22:09:19 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:47.029 22:09:19 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:47.029 22:09:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.029 22:09:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.029 ************************************ 00:07:47.030 START TEST accel_comp 00:07:47.030 ************************************ 
00:07:47.030 22:09:19 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:47.030 22:09:19 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:47.030 22:09:19 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:47.030 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.030 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.030 22:09:19 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:47.030 22:09:19 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:47.030 22:09:19 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:47.030 22:09:19 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.030 22:09:19 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.030 22:09:19 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.030 22:09:19 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.030 22:09:19 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.030 22:09:19 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:47.030 22:09:19 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:47.288 [2024-07-23 22:09:19.237857] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:47.288 [2024-07-23 22:09:19.237973] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75308 ] 00:07:47.288 [2024-07-23 22:09:19.359111] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:07:47.288 [2024-07-23 22:09:19.373926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.288 [2024-07-23 22:09:19.423123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.288 22:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:47.288 22:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.288 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.288 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.288 22:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:47.288 22:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.288 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.288 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.288 22:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:47.288 22:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.288 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.288 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.288 22:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:47.288 22:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.288 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.288 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.288 22:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:47.288 22:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.288 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.288 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.288 22:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:47.288 22:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.288 22:09:19 accel.accel_comp -- accel/accel.sh@19 
-- # IFS=: 00:07:47.288 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.289 22:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:47.289 22:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.289 22:09:19 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:47.289 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.289 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.289 22:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:47.289 22:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.289 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.289 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.289 22:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:47.289 22:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.289 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.289 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.289 22:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:47.289 22:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.289 22:09:19 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:47.289 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.289 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.289 22:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:47.289 22:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.289 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.289 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.547 22:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:47.547 22:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 
00:07:47.547 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.547 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.547 22:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:47.547 22:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.547 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.547 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.547 22:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:47.547 22:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.547 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.547 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.547 22:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:47.547 22:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.547 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.547 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.547 22:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:47.547 22:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.547 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.547 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.547 22:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:47.547 22:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.547 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.547 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:47.547 22:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:47.547 22:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.547 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:47.547 22:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.517 
22:09:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.517 22:09:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.517 22:09:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.517 22:09:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.517 22:09:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.517 22:09:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.517 22:09:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.517 22:09:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.517 22:09:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.517 22:09:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.517 22:09:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.517 22:09:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.517 22:09:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.517 22:09:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.517 22:09:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.517 22:09:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.517 22:09:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.517 22:09:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.517 22:09:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.517 22:09:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.517 22:09:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.517 22:09:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.517 22:09:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.517 22:09:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.517 ************************************ 00:07:48.517 END TEST accel_comp 00:07:48.517 ************************************ 00:07:48.517 22:09:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ 
-n software ]] 00:07:48.517 22:09:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:48.517 22:09:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.517 00:07:48.517 real 0m1.399s 00:07:48.517 user 0m1.198s 00:07:48.517 sys 0m0.113s 00:07:48.517 22:09:20 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.517 22:09:20 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:48.517 22:09:20 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:48.517 22:09:20 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:48.517 22:09:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.517 22:09:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:48.517 ************************************ 00:07:48.517 START TEST accel_decomp 00:07:48.517 ************************************ 00:07:48.517 22:09:20 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:48.517 22:09:20 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:48.517 22:09:20 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:48.517 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.517 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.517 22:09:20 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:48.517 22:09:20 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:48.517 22:09:20 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:48.517 22:09:20 accel.accel_decomp -- accel/accel.sh@31 -- # 
accel_json_cfg=() 00:07:48.517 22:09:20 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:48.517 22:09:20 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.517 22:09:20 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.517 22:09:20 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:48.517 22:09:20 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:48.517 22:09:20 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:48.517 [2024-07-23 22:09:20.695936] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:48.517 [2024-07-23 22:09:20.696039] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75337 ] 00:07:48.775 [2024-07-23 22:09:20.821673] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:48.775 [2024-07-23 22:09:20.841876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.775 [2024-07-23 22:09:20.893133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 
00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:48.775 22:09:20 accel.accel_decomp 
-- accel/accel.sh@21 -- # case "$var" in 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.775 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.776 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.776 22:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:48.776 22:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.776 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.776 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.776 22:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:48.776 22:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.776 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.776 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.776 22:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.776 22:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.776 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.776 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.776 22:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.776 22:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.776 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 
00:07:48.776 22:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:50.151 22:09:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:50.151 22:09:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:50.151 22:09:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:50.151 22:09:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:50.151 22:09:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:50.151 22:09:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:50.151 22:09:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:50.151 22:09:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:50.151 22:09:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:50.151 22:09:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:50.151 22:09:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:50.151 22:09:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:50.151 22:09:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:50.151 22:09:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:50.151 22:09:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:50.151 22:09:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:50.151 22:09:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:50.151 22:09:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:50.151 22:09:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:50.151 22:09:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:50.151 22:09:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:50.151 ************************************ 00:07:50.151 END TEST accel_decomp 00:07:50.151 ************************************ 00:07:50.151 22:09:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:50.151 22:09:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 
00:07:50.151 22:09:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:50.151 22:09:22 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:50.151 22:09:22 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:50.151 22:09:22 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.151 00:07:50.151 real 0m1.413s 00:07:50.151 user 0m1.212s 00:07:50.151 sys 0m0.111s 00:07:50.151 22:09:22 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.151 22:09:22 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:50.151 22:09:22 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:50.151 22:09:22 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:50.151 22:09:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.151 22:09:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.151 ************************************ 00:07:50.151 START TEST accel_decomp_full 00:07:50.151 ************************************ 00:07:50.151 22:09:22 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:50.151 22:09:22 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:50.151 22:09:22 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:50.151 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.151 22:09:22 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:50.151 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.151 22:09:22 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:50.151 22:09:22 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:50.151 22:09:22 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.151 22:09:22 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.151 22:09:22 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.151 22:09:22 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.151 22:09:22 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.151 22:09:22 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:50.151 22:09:22 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:50.151 [2024-07-23 22:09:22.159975] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:50.151 [2024-07-23 22:09:22.160058] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75377 ] 00:07:50.151 [2024-07-23 22:09:22.277181] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:50.151 [2024-07-23 22:09:22.297256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.411 [2024-07-23 22:09:22.354161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:50.411 22:09:22 accel.accel_decomp_full -- 
accel/accel.sh@21 -- # case "$var" in 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.411 22:09:22 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:50.411 22:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.412 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.412 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.412 22:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:50.412 22:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.412 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.412 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.412 22:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:50.412 22:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.412 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.412 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.412 22:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:50.412 22:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.412 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.412 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.412 22:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:50.412 22:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.412 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.412 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.412 22:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:50.412 22:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.412 22:09:22 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:07:50.412 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.412 22:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:50.412 22:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.412 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.412 22:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.790 22:09:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:51.790 22:09:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.790 22:09:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.790 22:09:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.790 22:09:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:51.790 22:09:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.790 22:09:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.790 22:09:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.790 22:09:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:51.790 22:09:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.790 22:09:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.790 22:09:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.790 22:09:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:51.790 22:09:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.790 22:09:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.790 22:09:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.790 22:09:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:51.790 22:09:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.790 22:09:23 accel.accel_decomp_full -- accel/accel.sh@19 
-- # IFS=: 00:07:51.790 22:09:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.790 22:09:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:51.790 22:09:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.790 22:09:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.790 22:09:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.790 22:09:23 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:51.790 22:09:23 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:51.790 22:09:23 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.790 00:07:51.790 real 0m1.418s 00:07:51.790 user 0m1.226s 00:07:51.790 sys 0m0.104s 00:07:51.790 22:09:23 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.790 22:09:23 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:51.790 ************************************ 00:07:51.790 END TEST accel_decomp_full 00:07:51.790 ************************************ 00:07:51.790 22:09:23 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:51.790 22:09:23 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:51.790 22:09:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.790 22:09:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:51.790 ************************************ 00:07:51.791 START TEST accel_decomp_mcore 00:07:51.791 ************************************ 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:51.791 22:09:23 
accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:51.791 [2024-07-23 22:09:23.638299] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:51.791 [2024-07-23 22:09:23.638669] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75406 ] 00:07:51.791 [2024-07-23 22:09:23.764914] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:51.791 [2024-07-23 22:09:23.785193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:51.791 [2024-07-23 22:09:23.847141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.791 [2024-07-23 22:09:23.847290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.791 [2024-07-23 22:09:23.847478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.791 [2024-07-23 22:09:23.848100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # 
IFS=: 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.791 22:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- 
accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:53.167 00:07:53.167 real 0m1.430s 00:07:53.167 user 0m4.537s 00:07:53.167 sys 0m0.130s 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.167 ************************************ 00:07:53.167 22:09:25 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:53.167 END TEST accel_decomp_mcore 00:07:53.167 ************************************ 00:07:53.167 22:09:25 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:53.167 22:09:25 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:53.167 22:09:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.167 22:09:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:53.167 ************************************ 00:07:53.167 START TEST accel_decomp_full_mcore 00:07:53.167 ************************************ 00:07:53.167 22:09:25 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:53.167 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:53.167 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:53.167 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.167 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.167 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:53.167 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf 
-c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:53.167 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:53.167 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:53.167 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:53.168 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.168 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.168 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:53.168 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:53.168 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:53.168 [2024-07-23 22:09:25.128524] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:53.168 [2024-07-23 22:09:25.128627] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75439 ] 00:07:53.168 [2024-07-23 22:09:25.254753] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:53.168 [2024-07-23 22:09:25.274712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:53.168 [2024-07-23 22:09:25.335078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.168 [2024-07-23 22:09:25.335250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.168 [2024-07-23 22:09:25.335413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.168 [2024-07-23 22:09:25.335563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore 
-- accel/accel.sh@20 -- # val= 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.426 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.427 22:09:25 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.427 22:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.361 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:54.361 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.362 22:09:26 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.362 00:07:54.362 real 0m1.436s 00:07:54.362 user 0m4.588s 00:07:54.362 sys 0m0.120s 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.362 22:09:26 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:54.362 ************************************ 00:07:54.362 END TEST accel_decomp_full_mcore 00:07:54.362 ************************************ 00:07:54.621 22:09:26 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:54.621 22:09:26 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:54.621 22:09:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.621 22:09:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:54.621 ************************************ 00:07:54.621 START TEST accel_decomp_mthread 00:07:54.621 ************************************ 00:07:54.621 22:09:26 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:54.621 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 
00:07:54.621 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:54.621 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.621 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:54.621 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.621 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:54.621 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:54.621 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:54.621 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:54.621 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.621 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.621 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:54.621 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:54.621 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:54.621 [2024-07-23 22:09:26.619479] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:54.621 [2024-07-23 22:09:26.619555] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75481 ] 00:07:54.621 [2024-07-23 22:09:26.735718] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:54.621 [2024-07-23 22:09:26.750118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.621 [2024-07-23 22:09:26.798689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # 
val= 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:54.879 
22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:54.879 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.880 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.880 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.880 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:54.880 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.880 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.880 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.880 22:09:26 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:54.880 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.880 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.880 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.880 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:54.880 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.880 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.880 22:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.816 ************************************ 00:07:55.816 END TEST accel_decomp_mthread 00:07:55.816 ************************************ 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:55.816 00:07:55.816 real 0m1.381s 00:07:55.816 user 0m1.199s 00:07:55.816 sys 0m0.093s 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.816 22:09:27 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:56.075 22:09:28 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:56.075 22:09:28 
accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:56.075 22:09:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.075 22:09:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:56.075 ************************************ 00:07:56.075 START TEST accel_decomp_full_mthread 00:07:56.075 ************************************ 00:07:56.075 22:09:28 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:56.075 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:56.075 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:56.075 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.075 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.075 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:56.075 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:56.075 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:56.075 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:56.075 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:56.075 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.075 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.075 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:56.075 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:56.075 
22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:56.075 [2024-07-23 22:09:28.079190] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:56.075 [2024-07-23 22:09:28.079555] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75510 ] 00:07:56.075 [2024-07-23 22:09:28.204991] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:56.075 [2024-07-23 22:09:28.217962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.075 [2024-07-23 22:09:28.268956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 
00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.335 22:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:57.710 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # 
IFS=: 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:57.711 ************************************ 00:07:57.711 END TEST accel_decomp_full_mthread 00:07:57.711 ************************************ 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:57.711 00:07:57.711 real 0m1.429s 00:07:57.711 user 0m1.227s 00:07:57.711 sys 0m0.110s 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.711 22:09:29 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:57.711 22:09:29 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:57.711 22:09:29 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:57.711 22:09:29 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:57.711 22:09:29 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:57.711 22:09:29 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:57.711 22:09:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.711 22:09:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:57.711 22:09:29 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:57.711 22:09:29 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.711 22:09:29 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.711 22:09:29 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:57.711 22:09:29 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:57.711 22:09:29 
accel -- accel/accel.sh@41 -- # jq -r . 00:07:57.711 ************************************ 00:07:57.711 START TEST accel_dif_functional_tests 00:07:57.711 ************************************ 00:07:57.711 22:09:29 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:57.711 [2024-07-23 22:09:29.575326] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:07:57.711 [2024-07-23 22:09:29.575400] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75551 ] 00:07:57.711 [2024-07-23 22:09:29.692736] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:57.711 [2024-07-23 22:09:29.707570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:57.711 [2024-07-23 22:09:29.758141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.711 [2024-07-23 22:09:29.758332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.711 [2024-07-23 22:09:29.758335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.711 [2024-07-23 22:09:29.800400] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:57.711 00:07:57.711 00:07:57.711 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.711 http://cunit.sourceforge.net/ 00:07:57.711 00:07:57.711 00:07:57.711 Suite: accel_dif 00:07:57.711 Test: verify: DIF generated, GUARD check ...passed 00:07:57.711 Test: verify: DIF generated, APPTAG check ...passed 00:07:57.711 Test: verify: DIF generated, REFTAG check ...passed 00:07:57.711 Test: verify: DIF not generated, GUARD check ...passed 00:07:57.711 Test: verify: 
DIF not generated, APPTAG check ...passed 00:07:57.711 Test: verify: DIF not generated, REFTAG check ...[2024-07-23 22:09:29.824735] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:57.711 [2024-07-23 22:09:29.824899] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:57.711 [2024-07-23 22:09:29.824934] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:57.711 passed 00:07:57.711 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:57.711 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:57.711 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:57.711 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-07-23 22:09:29.825050] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:57.711 passed 00:07:57.711 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:57.711 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed[2024-07-23 22:09:29.825286] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:57.711 00:07:57.711 Test: verify copy: DIF generated, GUARD check ...passed 00:07:57.711 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:57.711 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:57.711 Test: verify copy: DIF not generated, GUARD check ...passed 00:07:57.711 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-23 22:09:29.825521] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:57.711 [2024-07-23 22:09:29.825603] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:57.711 passed 00:07:57.711 Test: verify copy: DIF not generated, REFTAG check ...passed 00:07:57.711 Test: generate copy: DIF generated, GUARD check ...[2024-07-23 
22:09:29.825637] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:57.711 passed 00:07:57.711 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:57.711 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:57.711 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:57.711 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:57.711 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:57.711 Test: generate copy: iovecs-len validate ...passed 00:07:57.711 Test: generate copy: buffer alignment validate ...[2024-07-23 22:09:29.826018] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:57.711 passed 00:07:57.711 00:07:57.711 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.711 suites 1 1 n/a 0 0 00:07:57.711 tests 26 26 26 0 0 00:07:57.711 asserts 115 115 115 0 n/a 00:07:57.711 00:07:57.711 Elapsed time = 0.004 seconds 00:07:57.976 00:07:57.976 real 0m0.463s 00:07:57.976 user 0m0.577s 00:07:57.976 sys 0m0.132s 00:07:57.976 22:09:29 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.976 ************************************ 00:07:57.976 END TEST accel_dif_functional_tests 00:07:57.976 ************************************ 00:07:57.976 22:09:29 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:57.976 ************************************ 00:07:57.976 END TEST accel 00:07:57.976 ************************************ 00:07:57.976 00:07:57.976 real 0m32.544s 00:07:57.976 user 0m34.114s 00:07:57.976 sys 0m3.903s 00:07:57.976 22:09:30 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.976 22:09:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:57.976 22:09:30 -- spdk/autotest.sh@184 -- # run_test accel_rpc 
/home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:57.976 22:09:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:57.976 22:09:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.977 22:09:30 -- common/autotest_common.sh@10 -- # set +x 00:07:57.977 ************************************ 00:07:57.977 START TEST accel_rpc 00:07:57.977 ************************************ 00:07:57.977 22:09:30 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:58.250 * Looking for test storage... 00:07:58.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:58.250 22:09:30 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:58.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.250 22:09:30 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=75610 00:07:58.250 22:09:30 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 75610 00:07:58.250 22:09:30 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 75610 ']' 00:07:58.250 22:09:30 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:58.250 22:09:30 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.250 22:09:30 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:58.250 22:09:30 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.250 22:09:30 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:58.250 22:09:30 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.250 [2024-07-23 22:09:30.277366] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:07:58.250 [2024-07-23 22:09:30.277722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75610 ] 00:07:58.250 [2024-07-23 22:09:30.404592] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:58.251 [2024-07-23 22:09:30.420548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.510 [2024-07-23 22:09:30.470183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.510 22:09:30 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:58.510 22:09:30 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:58.510 22:09:30 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:58.510 22:09:30 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:58.510 22:09:30 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:58.510 22:09:30 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:58.510 22:09:30 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:58.510 22:09:30 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:58.510 22:09:30 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.510 22:09:30 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.510 ************************************ 00:07:58.510 START TEST accel_assign_opcode 00:07:58.510 ************************************ 00:07:58.510 22:09:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:58.510 22:09:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:58.510 22:09:30 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.510 22:09:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:58.510 [2024-07-23 22:09:30.514606] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:58.510 22:09:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.510 22:09:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:58.510 22:09:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.510 22:09:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:58.510 [2024-07-23 22:09:30.522591] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:58.510 22:09:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.510 22:09:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:58.510 22:09:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.510 22:09:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:58.510 [2024-07-23 22:09:30.571548] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.510 22:09:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.510 22:09:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:58.510 22:09:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.510 22:09:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:58.510 22:09:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:58.510 22:09:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42
-- # jq -r .copy 00:07:58.769 22:09:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.769 software 00:07:58.769 ************************************ 00:07:58.769 END TEST accel_assign_opcode 00:07:58.769 ************************************ 00:07:58.769 00:07:58.769 real 0m0.228s 00:07:58.769 user 0m0.047s 00:07:58.769 sys 0m0.016s 00:07:58.769 22:09:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.769 22:09:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:58.769 22:09:30 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 75610 00:07:58.769 22:09:30 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 75610 ']' 00:07:58.769 22:09:30 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 75610 00:07:58.769 22:09:30 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:58.770 22:09:30 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:58.770 22:09:30 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75610 00:07:58.770 killing process with pid 75610 00:07:58.770 22:09:30 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:58.770 22:09:30 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:58.770 22:09:30 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75610' 00:07:58.770 22:09:30 accel_rpc -- common/autotest_common.sh@967 -- # kill 75610 00:07:58.770 22:09:30 accel_rpc -- common/autotest_common.sh@972 -- # wait 75610 00:07:59.029 00:07:59.029 real 0m1.028s 00:07:59.029 user 0m0.925s 00:07:59.029 sys 0m0.433s 00:07:59.029 22:09:31 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.029 ************************************ 00:07:59.029 END TEST accel_rpc 00:07:59.029 ************************************ 00:07:59.029 22:09:31 accel_rpc -- common/autotest_common.sh@10 -- # set +x 
00:07:59.029 22:09:31 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:59.029 22:09:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:59.029 22:09:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.029 22:09:31 -- common/autotest_common.sh@10 -- # set +x 00:07:59.029 ************************************ 00:07:59.029 START TEST app_cmdline 00:07:59.029 ************************************ 00:07:59.029 22:09:31 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:59.288 * Looking for test storage... 00:07:59.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:59.288 22:09:31 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:59.288 22:09:31 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=75691 00:07:59.288 22:09:31 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 75691 00:07:59.288 22:09:31 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:59.288 22:09:31 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 75691 ']' 00:07:59.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.288 22:09:31 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.288 22:09:31 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:59.288 22:09:31 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.288 22:09:31 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:59.288 22:09:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:59.288 [2024-07-23 22:09:31.360082] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:07:59.288 [2024-07-23 22:09:31.360390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75691 ] 00:07:59.547 [2024-07-23 22:09:31.487709] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:59.547 [2024-07-23 22:09:31.506022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.547 [2024-07-23 22:09:31.555109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.547 [2024-07-23 22:09:31.596519] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.482 22:09:32 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:00.482 22:09:32 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:08:00.482 22:09:32 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:00.482 { 00:08:00.482 "version": "SPDK v24.09-pre git sha1 78cbcfdde", 00:08:00.482 "fields": { 00:08:00.482 "major": 24, 00:08:00.482 "minor": 9, 00:08:00.482 "patch": 0, 00:08:00.482 "suffix": "-pre", 00:08:00.482 "commit": "78cbcfdde" 00:08:00.482 } 00:08:00.482 } 00:08:00.482 22:09:32 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:00.482 22:09:32 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:00.482 22:09:32 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:00.482 22:09:32 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:00.482 22:09:32 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:00.482 22:09:32 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.482 22:09:32 app_cmdline
-- common/autotest_common.sh@10 -- # set +x 00:08:00.482 22:09:32 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:00.482 22:09:32 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:00.482 22:09:32 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.482 22:09:32 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:00.482 22:09:32 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:00.482 22:09:32 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:00.482 22:09:32 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:00.482 22:09:32 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:00.482 22:09:32 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.482 22:09:32 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.482 22:09:32 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.482 22:09:32 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.482 22:09:32 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.482 22:09:32 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.482 22:09:32 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.482 22:09:32 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:00.482 22:09:32 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:00.742 request: 00:08:00.742 { 00:08:00.742 "method": 
"env_dpdk_get_mem_stats", 00:08:00.742 "req_id": 1 00:08:00.742 } 00:08:00.742 Got JSON-RPC error response 00:08:00.742 response: 00:08:00.742 { 00:08:00.742 "code": -32601, 00:08:00.742 "message": "Method not found" 00:08:00.742 } 00:08:00.742 22:09:32 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:00.742 22:09:32 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:00.742 22:09:32 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:00.742 22:09:32 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:00.742 22:09:32 app_cmdline -- app/cmdline.sh@1 -- # killprocess 75691 00:08:00.742 22:09:32 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 75691 ']' 00:08:00.742 22:09:32 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 75691 00:08:00.742 22:09:32 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:08:00.742 22:09:32 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:00.742 22:09:32 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75691 00:08:00.742 killing process with pid 75691 00:08:00.742 22:09:32 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:00.742 22:09:32 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:00.742 22:09:32 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75691' 00:08:00.742 22:09:32 app_cmdline -- common/autotest_common.sh@967 -- # kill 75691 00:08:00.742 22:09:32 app_cmdline -- common/autotest_common.sh@972 -- # wait 75691 00:08:01.000 00:08:01.000 real 0m1.940s 00:08:01.000 user 0m2.384s 00:08:01.000 sys 0m0.448s 00:08:01.000 ************************************ 00:08:01.000 END TEST app_cmdline 00:08:01.000 ************************************ 00:08:01.000 22:09:33 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:01.000 22:09:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 
00:08:01.000 22:09:33 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:01.000 22:09:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:01.000 22:09:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.000 22:09:33 -- common/autotest_common.sh@10 -- # set +x 00:08:01.259 ************************************ 00:08:01.259 START TEST version 00:08:01.259 ************************************ 00:08:01.259 22:09:33 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:01.259 * Looking for test storage... 00:08:01.259 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:01.259 22:09:33 version -- app/version.sh@17 -- # get_header_version major 00:08:01.259 22:09:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:01.259 22:09:33 version -- app/version.sh@14 -- # cut -f2 00:08:01.259 22:09:33 version -- app/version.sh@14 -- # tr -d '"' 00:08:01.259 22:09:33 version -- app/version.sh@17 -- # major=24 00:08:01.259 22:09:33 version -- app/version.sh@18 -- # get_header_version minor 00:08:01.259 22:09:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:01.259 22:09:33 version -- app/version.sh@14 -- # cut -f2 00:08:01.259 22:09:33 version -- app/version.sh@14 -- # tr -d '"' 00:08:01.259 22:09:33 version -- app/version.sh@18 -- # minor=9 00:08:01.259 22:09:33 version -- app/version.sh@19 -- # get_header_version patch 00:08:01.259 22:09:33 version -- app/version.sh@14 -- # cut -f2 00:08:01.259 22:09:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:01.259 22:09:33 version -- app/version.sh@14 -- # tr -d '"' 00:08:01.259 22:09:33 version -- app/version.sh@19 -- # 
patch=0 00:08:01.259 22:09:33 version -- app/version.sh@20 -- # get_header_version suffix 00:08:01.259 22:09:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:01.259 22:09:33 version -- app/version.sh@14 -- # tr -d '"' 00:08:01.259 22:09:33 version -- app/version.sh@14 -- # cut -f2 00:08:01.259 22:09:33 version -- app/version.sh@20 -- # suffix=-pre 00:08:01.259 22:09:33 version -- app/version.sh@22 -- # version=24.9 00:08:01.259 22:09:33 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:01.259 22:09:33 version -- app/version.sh@28 -- # version=24.9rc0 00:08:01.259 22:09:33 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:01.259 22:09:33 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:01.259 22:09:33 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:01.259 22:09:33 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:01.259 00:08:01.259 real 0m0.169s 00:08:01.259 user 0m0.106s 00:08:01.259 sys 0m0.104s 00:08:01.259 ************************************ 00:08:01.259 END TEST version 00:08:01.259 ************************************ 00:08:01.259 22:09:33 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:01.259 22:09:33 version -- common/autotest_common.sh@10 -- # set +x 00:08:01.259 22:09:33 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:01.259 22:09:33 -- spdk/autotest.sh@198 -- # uname -s 00:08:01.259 22:09:33 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:01.259 22:09:33 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:01.259 22:09:33 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:08:01.259 22:09:33 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:08:01.259 22:09:33 -- 
spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:01.259 22:09:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:01.259 22:09:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.259 22:09:33 -- common/autotest_common.sh@10 -- # set +x 00:08:01.259 ************************************ 00:08:01.259 START TEST spdk_dd 00:08:01.259 ************************************ 00:08:01.259 22:09:33 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:01.518 * Looking for test storage... 00:08:01.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:01.518 22:09:33 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:01.518 22:09:33 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.518 22:09:33 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.518 22:09:33 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.518 22:09:33 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.518 22:09:33 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.518 22:09:33 spdk_dd -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.518 22:09:33 spdk_dd -- paths/export.sh@5 -- # export PATH 00:08:01.518 22:09:33 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.518 22:09:33 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:01.776 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:01.776 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:01.776 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:01.776 22:09:33 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:08:02.036 22:09:33 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:08:02.036 22:09:33 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:08:02.036 22:09:33 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:08:02.036 22:09:33 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:08:02.036 22:09:33 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:08:02.036 22:09:33 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:08:02.036 22:09:33 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:08:02.036 22:09:33 spdk_dd -- 
scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:08:02.036 22:09:33 spdk_dd -- scripts/common.sh@230 -- # local class 00:08:02.036 22:09:33 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:08:02.036 22:09:33 spdk_dd -- scripts/common.sh@232 -- # local progif 00:08:02.036 22:09:33 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:08:02.036 22:09:33 spdk_dd -- scripts/common.sh@233 -- # class=01 00:08:02.036 22:09:33 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:08:02.036 22:09:33 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:08:02.036 22:09:33 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:08:02.036 22:09:33 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:08:02.036 22:09:33 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:08:02.036 22:09:33 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:08:02.036 22:09:33 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:08:02.036 22:09:33 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:08:02.036 22:09:33 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:08:02.036 22:09:33 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:08:02.036 22:09:34 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:02.037 22:09:34 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:08:02.037 22:09:34 spdk_dd -- scripts/common.sh@15 -- # local i 00:08:02.037 22:09:34 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:08:02.037 22:09:34 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:08:02.037 22:09:34 spdk_dd -- scripts/common.sh@24 -- # return 0 00:08:02.037 22:09:34 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:08:02.037 22:09:34 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:02.037 22:09:34 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:08:02.037 22:09:34 spdk_dd -- scripts/common.sh@15 -- # 
local i 00:08:02.037 22:09:34 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:08:02.037 22:09:34 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:08:02.037 22:09:34 spdk_dd -- scripts/common.sh@24 -- # return 0 00:08:02.037 22:09:34 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:08:02.037 22:09:34 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:08:02.037 22:09:34 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:08:02.037 22:09:34 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:08:02.037 22:09:34 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:08:02.037 22:09:34 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:08:02.037 22:09:34 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:08:02.037 22:09:34 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:08:02.037 22:09:34 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:08:02.037 22:09:34 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:08:02.037 22:09:34 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:08:02.037 22:09:34 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:08:02.037 22:09:34 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:02.037 22:09:34 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@139 -- # local lib 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 
00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd 
-- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == 
liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # 
read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.037 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == 
liburing.so.* ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@143 
-- # [[ librte_log.so.24 == liburing.so.* ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 
00:08:02.038 * spdk_dd linked to liburing 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:02.038 22:09:34 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:02.038 22:09:34 spdk_dd -- 
common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:02.038 22:09:34 spdk_dd -- 
common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@67 -- # 
CONFIG_HAVE_KEYUTILS=y 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:02.038 22:09:34 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:08:02.039 22:09:34 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:08:02.039 22:09:34 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:08:02.039 22:09:34 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:08:02.039 22:09:34 spdk_dd -- dd/common.sh@153 -- # return 0 00:08:02.039 22:09:34 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:08:02.039 22:09:34 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:08:02.039 22:09:34 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:02.039 22:09:34 spdk_dd -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.039 22:09:34 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:02.039 ************************************ 00:08:02.039 START TEST spdk_dd_basic_rw 00:08:02.039 ************************************ 00:08:02.039 22:09:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:08:02.039 * Looking for test storage... 00:08:02.039 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:02.039 22:09:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:02.039 22:09:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.039 22:09:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.039 22:09:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.039 22:09:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.039 22:09:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.039 22:09:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.039 22:09:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:08:02.039 22:09:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.039 22:09:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:08:02.039 22:09:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # 
nvmes=("$@") 00:08:02.039 22:09:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:08:02.039 22:09:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:08:02.039 22:09:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:08:02.039 22:09:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:02.039 22:09:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:02.039 22:09:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.039 22:09:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.039 22:09:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:08:02.039 22:09:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:08:02.039 22:09:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:08:02.039 22:09:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:08:02.300 22:09:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 
1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not 
Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands 
Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes 
Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:08:02.301 22:09:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:08:02.301 22:09:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change 
Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported 
Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell 
Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: 
Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:08:02.301 22:09:34 
spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:08:02.301 22:09:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:08:02.301 22:09:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:08:02.301 22:09:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:02.301 22:09:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:02.301 22:09:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.301 22:09:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:08:02.301 22:09:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:08:02.301 22:09:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:02.301 22:09:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:02.301 22:09:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:02.301 ************************************ 00:08:02.301 START TEST dd_bs_lt_native_bs 00:08:02.301 ************************************ 00:08:02.301 22:09:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:02.301 22:09:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:08:02.301 22:09:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:02.301 22:09:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.301 22:09:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.302 22:09:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.302 22:09:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.302 22:09:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.302 22:09:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.302 22:09:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.302 22:09:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:02.302 22:09:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:02.302 { 00:08:02.302 "subsystems": [ 00:08:02.302 { 00:08:02.302 "subsystem": "bdev", 00:08:02.302 "config": [ 00:08:02.302 { 00:08:02.302 "params": { 00:08:02.302 "trtype": "pcie", 00:08:02.302 "traddr": "0000:00:10.0", 00:08:02.302 "name": "Nvme0" 00:08:02.302 }, 00:08:02.302 "method": "bdev_nvme_attach_controller" 00:08:02.302 }, 00:08:02.302 { 00:08:02.302 "method": "bdev_wait_for_examine" 00:08:02.302 } 00:08:02.302 ] 00:08:02.302 } 00:08:02.302 ] 00:08:02.302 } 00:08:02.302 [2024-07-23 22:09:34.466775] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:08:02.302 [2024-07-23 22:09:34.467422] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76020 ] 00:08:02.560 [2024-07-23 22:09:34.593868] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:02.560 [2024-07-23 22:09:34.615453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.560 [2024-07-23 22:09:34.672523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.560 [2024-07-23 22:09:34.721022] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:02.819 [2024-07-23 22:09:34.819336] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:08:02.819 [2024-07-23 22:09:34.819420] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:02.819 [2024-07-23 22:09:34.923559] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:02.819 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:08:02.819 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:02.819 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:08:02.819 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:08:02.819 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:08:02.819 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:02.819 00:08:02.819 real 0m0.594s 00:08:02.819 user 0m0.376s 00:08:02.819 sys 0m0.165s 
00:08:02.819 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.819 ************************************ 00:08:02.819 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:08:02.819 END TEST dd_bs_lt_native_bs 00:08:02.819 ************************************ 00:08:03.078 22:09:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:08:03.078 22:09:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:03.078 22:09:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.078 22:09:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:03.078 ************************************ 00:08:03.078 START TEST dd_rw 00:08:03.078 ************************************ 00:08:03.078 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:08:03.078 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:08:03.078 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:08:03.078 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:08:03.078 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:08:03.078 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:03.078 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:03.078 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:03.078 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:03.078 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:03.078 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 
00:08:03.078 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:03.078 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:03.078 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:08:03.078 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:08:03.078 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:08:03.078 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:03.078 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:03.078 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:03.645 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:08:03.645 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:03.645 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:03.645 22:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:03.646 [2024-07-23 22:09:35.707156] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:08:03.646 [2024-07-23 22:09:35.707257] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76051 ] 00:08:03.646 { 00:08:03.646 "subsystems": [ 00:08:03.646 { 00:08:03.646 "subsystem": "bdev", 00:08:03.646 "config": [ 00:08:03.646 { 00:08:03.646 "params": { 00:08:03.646 "trtype": "pcie", 00:08:03.646 "traddr": "0000:00:10.0", 00:08:03.646 "name": "Nvme0" 00:08:03.646 }, 00:08:03.646 "method": "bdev_nvme_attach_controller" 00:08:03.646 }, 00:08:03.646 { 00:08:03.646 "method": "bdev_wait_for_examine" 00:08:03.646 } 00:08:03.646 ] 00:08:03.646 } 00:08:03.646 ] 00:08:03.646 } 00:08:03.646 [2024-07-23 22:09:35.833223] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:03.904 [2024-07-23 22:09:35.851552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.904 [2024-07-23 22:09:35.909092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.904 [2024-07-23 22:09:35.957444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:04.163  Copying: 60/60 [kB] (average 19 MBps) 00:08:04.163 00:08:04.163 22:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:08:04.163 22:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:04.163 22:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:04.163 22:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:04.163 { 00:08:04.163 "subsystems": [ 00:08:04.163 { 00:08:04.163 "subsystem": "bdev", 
00:08:04.163 "config": [ 00:08:04.163 { 00:08:04.163 "params": { 00:08:04.163 "trtype": "pcie", 00:08:04.163 "traddr": "0000:00:10.0", 00:08:04.163 "name": "Nvme0" 00:08:04.163 }, 00:08:04.163 "method": "bdev_nvme_attach_controller" 00:08:04.163 }, 00:08:04.163 { 00:08:04.163 "method": "bdev_wait_for_examine" 00:08:04.163 } 00:08:04.163 ] 00:08:04.163 } 00:08:04.163 ] 00:08:04.163 } 00:08:04.163 [2024-07-23 22:09:36.297600] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:04.163 [2024-07-23 22:09:36.298276] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76064 ] 00:08:04.421 [2024-07-23 22:09:36.425030] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:04.421 [2024-07-23 22:09:36.442255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.421 [2024-07-23 22:09:36.495266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.421 [2024-07-23 22:09:36.539025] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:04.678  Copying: 60/60 [kB] (average 19 MBps) 00:08:04.678 00:08:04.678 22:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:04.678 22:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:04.679 22:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:04.679 22:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:04.679 22:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:04.679 22:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:04.679 22:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:04.679 22:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:04.679 22:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:04.679 22:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:04.679 22:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:04.936 { 00:08:04.937 "subsystems": [ 00:08:04.937 { 00:08:04.937 "subsystem": "bdev", 00:08:04.937 "config": [ 00:08:04.937 { 00:08:04.937 "params": { 00:08:04.937 "trtype": "pcie", 00:08:04.937 "traddr": "0000:00:10.0", 00:08:04.937 "name": "Nvme0" 00:08:04.937 }, 00:08:04.937 "method": "bdev_nvme_attach_controller" 00:08:04.937 }, 00:08:04.937 { 00:08:04.937 
"method": "bdev_wait_for_examine" 00:08:04.937 } 00:08:04.937 ] 00:08:04.937 } 00:08:04.937 ] 00:08:04.937 } 00:08:04.937 [2024-07-23 22:09:36.878530] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:04.937 [2024-07-23 22:09:36.878621] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76085 ] 00:08:04.937 [2024-07-23 22:09:37.005389] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:04.937 [2024-07-23 22:09:37.020839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.937 [2024-07-23 22:09:37.078294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.937 [2024-07-23 22:09:37.119910] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:05.453  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:05.453 00:08:05.453 22:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:05.453 22:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:08:05.453 22:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:08:05.453 22:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:08:05.453 22:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:05.453 22:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:05.453 22:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:06.018 22:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 
--json /dev/fd/62 00:08:06.018 22:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:06.018 22:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:06.018 22:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:06.018 { 00:08:06.018 "subsystems": [ 00:08:06.018 { 00:08:06.018 "subsystem": "bdev", 00:08:06.018 "config": [ 00:08:06.018 { 00:08:06.018 "params": { 00:08:06.018 "trtype": "pcie", 00:08:06.018 "traddr": "0000:00:10.0", 00:08:06.018 "name": "Nvme0" 00:08:06.018 }, 00:08:06.018 "method": "bdev_nvme_attach_controller" 00:08:06.018 }, 00:08:06.018 { 00:08:06.018 "method": "bdev_wait_for_examine" 00:08:06.018 } 00:08:06.018 ] 00:08:06.018 } 00:08:06.018 ] 00:08:06.018 } 00:08:06.018 [2024-07-23 22:09:38.047431] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:06.018 [2024-07-23 22:09:38.047547] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76104 ] 00:08:06.018 [2024-07-23 22:09:38.174466] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:06.018 [2024-07-23 22:09:38.192638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.275 [2024-07-23 22:09:38.241177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.275 [2024-07-23 22:09:38.282508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:06.533  Copying: 60/60 [kB] (average 58 MBps) 00:08:06.533 00:08:06.533 22:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:06.533 22:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:06.533 22:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:08:06.533 22:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:06.533 { 00:08:06.533 "subsystems": [ 00:08:06.533 { 00:08:06.533 "subsystem": "bdev", 00:08:06.533 "config": [ 00:08:06.533 { 00:08:06.533 "params": { 00:08:06.533 "trtype": "pcie", 00:08:06.533 "traddr": "0000:00:10.0", 00:08:06.533 "name": "Nvme0" 00:08:06.533 }, 00:08:06.533 "method": "bdev_nvme_attach_controller" 00:08:06.533 }, 00:08:06.533 { 00:08:06.533 "method": "bdev_wait_for_examine" 00:08:06.533 } 00:08:06.533 ] 00:08:06.533 } 00:08:06.533 ] 00:08:06.533 } 00:08:06.533 [2024-07-23 22:09:38.605002] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:06.533 [2024-07-23 22:09:38.605094] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76118 ] 00:08:06.791 [2024-07-23 22:09:38.731543] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:08:06.792 [2024-07-23 22:09:38.749404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.792 [2024-07-23 22:09:38.798331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.792 [2024-07-23 22:09:38.839650] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:07.050  Copying: 60/60 [kB] (average 29 MBps) 00:08:07.050 00:08:07.050 22:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:07.050 22:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:07.050 22:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:07.050 22:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:07.050 22:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:07.050 22:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:07.050 22:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:07.050 22:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:07.050 22:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:07.050 22:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:07.050 22:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:07.050 { 00:08:07.050 "subsystems": [ 00:08:07.050 { 00:08:07.050 "subsystem": "bdev", 00:08:07.050 "config": [ 00:08:07.050 { 00:08:07.050 "params": { 00:08:07.050 "trtype": "pcie", 00:08:07.050 "traddr": "0000:00:10.0", 00:08:07.050 "name": "Nvme0" 00:08:07.050 }, 00:08:07.050 "method": "bdev_nvme_attach_controller" 00:08:07.050 }, 
00:08:07.050 { 00:08:07.050 "method": "bdev_wait_for_examine" 00:08:07.050 } 00:08:07.050 ] 00:08:07.050 } 00:08:07.050 ] 00:08:07.050 } 00:08:07.050 [2024-07-23 22:09:39.174331] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:07.050 [2024-07-23 22:09:39.174435] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76133 ] 00:08:07.308 [2024-07-23 22:09:39.300703] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:07.308 [2024-07-23 22:09:39.318783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.308 [2024-07-23 22:09:39.367567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.308 [2024-07-23 22:09:39.409254] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:07.566  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:07.566 00:08:07.566 22:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:07.566 22:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:07.566 22:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:07.566 22:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:07.567 22:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:07.567 22:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:07.567 22:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:07.567 22:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:08.133 22:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:08:08.133 22:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:08.133 22:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:08.133 22:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:08.133 [2024-07-23 22:09:40.271046] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:08.133 [2024-07-23 22:09:40.271151] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76152 ] 00:08:08.133 { 00:08:08.133 "subsystems": [ 00:08:08.133 { 00:08:08.133 "subsystem": "bdev", 00:08:08.133 "config": [ 00:08:08.133 { 00:08:08.133 "params": { 00:08:08.133 "trtype": "pcie", 00:08:08.133 "traddr": "0000:00:10.0", 00:08:08.133 "name": "Nvme0" 00:08:08.133 }, 00:08:08.133 "method": "bdev_nvme_attach_controller" 00:08:08.133 }, 00:08:08.133 { 00:08:08.133 "method": "bdev_wait_for_examine" 00:08:08.133 } 00:08:08.133 ] 00:08:08.133 } 00:08:08.133 ] 00:08:08.133 } 00:08:08.390 [2024-07-23 22:09:40.397609] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:08.391 [2024-07-23 22:09:40.416775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.391 [2024-07-23 22:09:40.465469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.391 [2024-07-23 22:09:40.506620] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:08.648  Copying: 56/56 [kB] (average 27 MBps) 00:08:08.648 00:08:08.648 22:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:08:08.648 22:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:08.648 22:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:08.648 22:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:08.648 { 00:08:08.648 "subsystems": [ 00:08:08.648 { 00:08:08.648 "subsystem": "bdev", 00:08:08.648 "config": [ 00:08:08.648 { 00:08:08.648 "params": { 00:08:08.648 "trtype": "pcie", 00:08:08.648 "traddr": "0000:00:10.0", 00:08:08.648 "name": "Nvme0" 00:08:08.648 }, 00:08:08.648 "method": "bdev_nvme_attach_controller" 00:08:08.648 }, 00:08:08.648 { 00:08:08.648 "method": "bdev_wait_for_examine" 00:08:08.648 } 00:08:08.648 ] 00:08:08.648 } 00:08:08.648 ] 00:08:08.648 } 00:08:08.648 [2024-07-23 22:09:40.833458] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:08.649 [2024-07-23 22:09:40.833575] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76166 ] 00:08:08.905 [2024-07-23 22:09:40.960388] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:08:08.905 [2024-07-23 22:09:40.977092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.905 [2024-07-23 22:09:41.025950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.905 [2024-07-23 22:09:41.067418] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:09.163  Copying: 56/56 [kB] (average 27 MBps) 00:08:09.163 00:08:09.163 22:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:09.163 22:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:09.163 22:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:09.163 22:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:09.163 22:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:09.163 22:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:09.163 22:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:09.163 22:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:09.163 22:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:09.163 22:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:09.163 22:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:09.422 { 00:08:09.422 "subsystems": [ 00:08:09.422 { 00:08:09.422 "subsystem": "bdev", 00:08:09.422 "config": [ 00:08:09.422 { 00:08:09.422 "params": { 00:08:09.422 "trtype": "pcie", 00:08:09.422 "traddr": "0000:00:10.0", 00:08:09.422 "name": "Nvme0" 00:08:09.422 }, 00:08:09.422 "method": "bdev_nvme_attach_controller" 00:08:09.422 }, 
00:08:09.422 { 00:08:09.422 "method": "bdev_wait_for_examine" 00:08:09.422 } 00:08:09.422 ] 00:08:09.422 } 00:08:09.422 ] 00:08:09.422 } 00:08:09.422 [2024-07-23 22:09:41.402782] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:09.422 [2024-07-23 22:09:41.402916] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76181 ] 00:08:09.422 [2024-07-23 22:09:41.529213] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:09.422 [2024-07-23 22:09:41.547947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.422 [2024-07-23 22:09:41.596975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.680 [2024-07-23 22:09:41.638315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:09.937  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:09.937 00:08:09.937 22:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:09.937 22:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:09.937 22:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:09.938 22:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:09.938 22:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:09.938 22:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:09.938 22:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:10.503 22:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:08:10.503 22:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:10.503 22:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:10.503 22:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:10.503 [2024-07-23 22:09:42.436911] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:10.503 [2024-07-23 22:09:42.436986] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76200 ] 00:08:10.503 { 00:08:10.503 "subsystems": [ 00:08:10.503 { 00:08:10.503 "subsystem": "bdev", 00:08:10.503 "config": [ 00:08:10.503 { 00:08:10.503 "params": { 00:08:10.503 "trtype": "pcie", 00:08:10.503 "traddr": "0000:00:10.0", 00:08:10.503 "name": "Nvme0" 00:08:10.503 }, 00:08:10.503 "method": "bdev_nvme_attach_controller" 00:08:10.503 }, 00:08:10.503 { 00:08:10.503 "method": "bdev_wait_for_examine" 00:08:10.503 } 00:08:10.503 ] 00:08:10.503 } 00:08:10.503 ] 00:08:10.503 } 00:08:10.503 [2024-07-23 22:09:42.553850] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:10.504 [2024-07-23 22:09:42.567880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.504 [2024-07-23 22:09:42.616799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.504 [2024-07-23 22:09:42.658236] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:10.762  Copying: 56/56 [kB] (average 54 MBps) 00:08:10.762 00:08:10.762 22:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:08:10.762 22:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:10.762 22:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:10.762 22:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:11.021 [2024-07-23 22:09:42.964885] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:11.021 [2024-07-23 22:09:42.965181] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76214 ] 00:08:11.021 { 00:08:11.021 "subsystems": [ 00:08:11.021 { 00:08:11.021 "subsystem": "bdev", 00:08:11.021 "config": [ 00:08:11.021 { 00:08:11.021 "params": { 00:08:11.021 "trtype": "pcie", 00:08:11.021 "traddr": "0000:00:10.0", 00:08:11.021 "name": "Nvme0" 00:08:11.021 }, 00:08:11.021 "method": "bdev_nvme_attach_controller" 00:08:11.021 }, 00:08:11.021 { 00:08:11.021 "method": "bdev_wait_for_examine" 00:08:11.021 } 00:08:11.021 ] 00:08:11.021 } 00:08:11.021 ] 00:08:11.021 } 00:08:11.021 [2024-07-23 22:09:43.082263] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:08:11.021 [2024-07-23 22:09:43.096037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.021 [2024-07-23 22:09:43.145495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.021 [2024-07-23 22:09:43.186963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:11.280  Copying: 56/56 [kB] (average 54 MBps) 00:08:11.280 00:08:11.280 22:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:11.280 22:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:11.280 22:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:11.280 22:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:11.280 22:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:11.280 22:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:11.280 22:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:11.280 22:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:11.280 22:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:11.280 22:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:11.280 22:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:11.538 { 00:08:11.538 "subsystems": [ 00:08:11.538 { 00:08:11.538 "subsystem": "bdev", 00:08:11.538 "config": [ 00:08:11.538 { 00:08:11.538 "params": { 00:08:11.538 "trtype": "pcie", 00:08:11.538 "traddr": "0000:00:10.0", 00:08:11.538 "name": "Nvme0" 00:08:11.538 }, 00:08:11.538 "method": "bdev_nvme_attach_controller" 00:08:11.538 }, 
00:08:11.538 { 00:08:11.538 "method": "bdev_wait_for_examine" 00:08:11.538 } 00:08:11.538 ] 00:08:11.538 } 00:08:11.538 ] 00:08:11.538 } 00:08:11.538 [2024-07-23 22:09:43.520571] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:11.538 [2024-07-23 22:09:43.520672] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76229 ] 00:08:11.538 [2024-07-23 22:09:43.646701] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:11.538 [2024-07-23 22:09:43.663736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.538 [2024-07-23 22:09:43.712250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.795 [2024-07-23 22:09:43.753665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:12.053  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:12.053 00:08:12.053 22:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:12.053 22:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:12.053 22:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:12.053 22:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:12.053 22:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:12.053 22:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:12.053 22:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:12.053 22:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:12.310 22:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:08:12.310 22:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:12.310 22:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:12.310 22:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:12.567 [2024-07-23 22:09:44.521796] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:12.568 [2024-07-23 22:09:44.522288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76248 ] 00:08:12.568 { 00:08:12.568 "subsystems": [ 00:08:12.568 { 00:08:12.568 "subsystem": "bdev", 00:08:12.568 "config": [ 00:08:12.568 { 00:08:12.568 "params": { 00:08:12.568 "trtype": "pcie", 00:08:12.568 "traddr": "0000:00:10.0", 00:08:12.568 "name": "Nvme0" 00:08:12.568 }, 00:08:12.568 "method": "bdev_nvme_attach_controller" 00:08:12.568 }, 00:08:12.568 { 00:08:12.568 "method": "bdev_wait_for_examine" 00:08:12.568 } 00:08:12.568 ] 00:08:12.568 } 00:08:12.568 ] 00:08:12.568 } 00:08:12.568 [2024-07-23 22:09:44.650471] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:12.568 [2024-07-23 22:09:44.664645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.568 [2024-07-23 22:09:44.713685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.568 [2024-07-23 22:09:44.755032] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:12.825  Copying: 48/48 [kB] (average 46 MBps) 00:08:12.825 00:08:13.082 22:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:08:13.082 22:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:13.082 22:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:13.082 22:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:13.082 { 00:08:13.082 "subsystems": [ 00:08:13.082 { 00:08:13.082 "subsystem": "bdev", 00:08:13.082 "config": [ 00:08:13.082 { 00:08:13.082 "params": { 00:08:13.082 "trtype": "pcie", 00:08:13.082 "traddr": "0000:00:10.0", 00:08:13.082 "name": "Nvme0" 00:08:13.082 }, 00:08:13.082 "method": "bdev_nvme_attach_controller" 00:08:13.083 }, 00:08:13.083 { 00:08:13.083 "method": "bdev_wait_for_examine" 00:08:13.083 } 00:08:13.083 ] 00:08:13.083 } 00:08:13.083 ] 00:08:13.083 } 00:08:13.083 [2024-07-23 22:09:45.080923] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:13.083 [2024-07-23 22:09:45.081042] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76262 ] 00:08:13.083 [2024-07-23 22:09:45.207422] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:08:13.083 [2024-07-23 22:09:45.220155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.083 [2024-07-23 22:09:45.269128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.340 [2024-07-23 22:09:45.310571] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:13.597  Copying: 48/48 [kB] (average 46 MBps) 00:08:13.597 00:08:13.597 22:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:13.597 22:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:13.597 22:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:13.597 22:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:13.597 22:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:13.597 22:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:13.597 22:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:13.598 22:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:13.598 22:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:13.598 22:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:13.598 22:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:13.598 [2024-07-23 22:09:45.630986] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:08:13.598 [2024-07-23 22:09:45.631060] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76277 ] 00:08:13.598 { 00:08:13.598 "subsystems": [ 00:08:13.598 { 00:08:13.598 "subsystem": "bdev", 00:08:13.598 "config": [ 00:08:13.598 { 00:08:13.598 "params": { 00:08:13.598 "trtype": "pcie", 00:08:13.598 "traddr": "0000:00:10.0", 00:08:13.598 "name": "Nvme0" 00:08:13.598 }, 00:08:13.598 "method": "bdev_nvme_attach_controller" 00:08:13.598 }, 00:08:13.598 { 00:08:13.598 "method": "bdev_wait_for_examine" 00:08:13.598 } 00:08:13.598 ] 00:08:13.598 } 00:08:13.598 ] 00:08:13.598 } 00:08:13.598 [2024-07-23 22:09:45.749043] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:13.598 [2024-07-23 22:09:45.764826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.856 [2024-07-23 22:09:45.813296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.856 [2024-07-23 22:09:45.854580] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:14.113  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:14.113 00:08:14.113 22:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:14.113 22:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:14.113 22:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:14.113 22:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:14.113 22:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:14.113 22:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:14.113 22:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
common/autotest_common.sh@10 -- # set +x 00:08:14.407 22:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:08:14.407 22:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:14.407 22:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:14.407 22:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:14.665 { 00:08:14.665 "subsystems": [ 00:08:14.665 { 00:08:14.665 "subsystem": "bdev", 00:08:14.665 "config": [ 00:08:14.665 { 00:08:14.665 "params": { 00:08:14.665 "trtype": "pcie", 00:08:14.665 "traddr": "0000:00:10.0", 00:08:14.665 "name": "Nvme0" 00:08:14.665 }, 00:08:14.665 "method": "bdev_nvme_attach_controller" 00:08:14.665 }, 00:08:14.665 { 00:08:14.665 "method": "bdev_wait_for_examine" 00:08:14.665 } 00:08:14.665 ] 00:08:14.665 } 00:08:14.665 ] 00:08:14.665 } 00:08:14.665 [2024-07-23 22:09:46.630845] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:14.665 [2024-07-23 22:09:46.631165] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76296 ] 00:08:14.665 [2024-07-23 22:09:46.757981] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:14.665 [2024-07-23 22:09:46.777446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.665 [2024-07-23 22:09:46.826337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.923 [2024-07-23 22:09:46.867765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:15.181  Copying: 48/48 [kB] (average 46 MBps) 00:08:15.181 00:08:15.181 22:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:08:15.181 22:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:15.181 22:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:15.181 22:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:15.181 [2024-07-23 22:09:47.195595] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:15.181 [2024-07-23 22:09:47.195700] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76310 ] 00:08:15.181 { 00:08:15.181 "subsystems": [ 00:08:15.181 { 00:08:15.181 "subsystem": "bdev", 00:08:15.181 "config": [ 00:08:15.181 { 00:08:15.181 "params": { 00:08:15.181 "trtype": "pcie", 00:08:15.181 "traddr": "0000:00:10.0", 00:08:15.181 "name": "Nvme0" 00:08:15.181 }, 00:08:15.181 "method": "bdev_nvme_attach_controller" 00:08:15.181 }, 00:08:15.181 { 00:08:15.181 "method": "bdev_wait_for_examine" 00:08:15.181 } 00:08:15.181 ] 00:08:15.181 } 00:08:15.181 ] 00:08:15.181 } 00:08:15.181 [2024-07-23 22:09:47.321958] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:08:15.181 [2024-07-23 22:09:47.340292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.439 [2024-07-23 22:09:47.389050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.439 [2024-07-23 22:09:47.430584] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:15.697  Copying: 48/48 [kB] (average 46 MBps) 00:08:15.697 00:08:15.697 22:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:15.697 22:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:15.697 22:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:15.697 22:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:15.697 22:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:15.697 22:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:15.697 22:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:15.697 22:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:15.697 22:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:15.697 22:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:15.697 22:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:15.697 { 00:08:15.697 "subsystems": [ 00:08:15.697 { 00:08:15.697 "subsystem": "bdev", 00:08:15.697 "config": [ 00:08:15.697 { 00:08:15.697 "params": { 00:08:15.697 "trtype": "pcie", 00:08:15.697 "traddr": "0000:00:10.0", 00:08:15.697 "name": "Nvme0" 00:08:15.697 }, 00:08:15.697 "method": "bdev_nvme_attach_controller" 00:08:15.697 }, 
00:08:15.697 { 00:08:15.697 "method": "bdev_wait_for_examine" 00:08:15.697 } 00:08:15.697 ] 00:08:15.697 } 00:08:15.697 ] 00:08:15.697 } 00:08:15.697 [2024-07-23 22:09:47.757766] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:15.697 [2024-07-23 22:09:47.757898] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76330 ] 00:08:15.697 [2024-07-23 22:09:47.884629] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:15.955 [2024-07-23 22:09:47.900488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.955 [2024-07-23 22:09:47.949039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.955 [2024-07-23 22:09:47.990358] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:16.213  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:16.213 00:08:16.213 00:08:16.213 real 0m13.189s 00:08:16.213 user 0m9.228s 00:08:16.213 sys 0m4.963s 00:08:16.213 22:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.213 22:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:16.213 ************************************ 00:08:16.213 END TEST dd_rw 00:08:16.213 ************************************ 00:08:16.213 22:09:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:08:16.213 22:09:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:16.214 22:09:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.214 22:09:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 
00:08:16.214 ************************************ 00:08:16.214 START TEST dd_rw_offset 00:08:16.214 ************************************ 00:08:16.214 22:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:08:16.214 22:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:08:16.214 22:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:08:16.214 22:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:08:16.214 22:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:16.214 22:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:08:16.214 22:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # data=2x0zjtg5ranjkn2qj3fs31c9i77wmwri8d2laetpfel2zwpymiqsne6a4a3adncg7k0cpnhjqckydd4fbyr4sl5qez0pv1zoqlsqlhoo8gewc0hkxkivbv34c8v5smjze5wxodhjm8wrt22gdbtlflp0k5chm9c6pw5o933bv5m04yguywr9yc85tfo0knrz0dcqudwqfrmj6w4kogyrvkbig94nlonzc67w6ta414qdn6xyu8yvts8zujw26crf2wz9snmizm2ok8snrtocyvmsbv44gsv0wgmtkq8ikomymvslzthclq0aukxyu79dsa8fzndyazgejeek029yg2et3ij9w9zn9hawa6im85ppfsu1jzp0e7a5ii167ac2qxzfysc97nwfp97gpufmjhhoxbzm64jntq3dlczka50vxrb3nhv1v8pmt2mwahuw8uf988f2ucf2bx21jkja5s5dnbiwi31jda26swr4eg3wygfgv5hj3z1ckfw9fbb7lxjq1p7w66ax1qmkrai9or3fsd48fsztt6m8z79qund1y6mrtmjepgnuilj8sarpnsg3g3bougag4aof7q85yaxoph3ofehuwsbxo66811nsqjt9wmk45npmysc7t513cvec1hceccwme1om745ioj6zmsm0db2jfp0s8yaxbakqbfgm837322kpq8t14dpxohd7i51bcbeegmxho7vzm4rv55dnhok7z0ka0i7sadk9omst9tnagevu2c7lgmz9hbo4yetc89medy623c03usxbwmpnymozwdzh9exm8gz8begk2nkhgu38rhi7lu8q80wlwkb33th8hsh542q0p8dw78jlqlv463r10d2nmcd76pltvy2la0q6rliomjfc3p3f16px5xye1pnyddire5cn8nahoohoquenovxo2reg1n3fa6rzjrxiv7aadxf9jlbx36nixxn38ea2bxr145jzp4c98md4mwuo73yrxvfjpan17r0t6xtls6e4epspc6zi4ebko54jac5ilibsxu2suenyopoddmzcwdak7pxffpda32z3oea5stte6etl8f2zrdys0wt9n3jf56f383q5oj0g
nevdrybepw247wh3scv3m3dd8xultew7725nqp3r8zbec0szdgeg77enqopzo410enz1xluxtnnigg6cirht2f1dgd1fq0uzkg7mmxro8f481xewoam7h8ikj0giss7sk22c180x1pvue7oloh9laroeup6e7rbq84ealxsw1auk7eglqpqyd4nbmov6c3335exu63fqxecs62xtcfxyhj9w6nqot03grutvkbh19qfjojlekxewpx6wia3y0cwoxxseroqtvq2gz8cwj39624hy1qkyzhr5f390wftwb4be6mz6f7b7avyoi7l2gaiz6vpsjtzq6q1ogmxaq8q3c9lexn13iag7zo63xckbxhqeesm7ws7avw3je9cnk8tncb7jbwuzjmu1aq5cetf02qmcq4kn8w2fjue42jpss9q79csky1569f3kuix8vo0xnm2cmwbyz8tgv790le9902k3bh41cyhihctstgssk7329dcwl4qqxl3pwsyv7whquldfguqg8tly5ru8am82xfwphwf3ip49agosqwcw1eo70xlawzcdtapc2jztrfygoyi9cw2s16g7utd78rarvfmdtubyalx0yiu54556050ckjw571bw4ebftw7kh4mmgz6sbv8tw5yuswwvjf9f8xfa5adqp23dop2y5z3eh8dpqd5y47fmlg8idopmfbgviqt1d36y2mlto2xtqzgbw1k0uedon3jom39q1qkco05gaqj9py5ji4lcqexsqnvteal2nkp1wj0oavf2kmd93tkytyglqjegjhnco0rxzq1rcf2hm3xy5vf20q15aeagbdlypqtoz9efptqkjl0ztgdqefw9mcpjcy2rxid8itbwez7ws8ja1i6yasxq6rtxr2ieaxorocqoycfwueeh2bsurulcnmsj9t0i6rcca9vcun6jsx0yk5ragpqcp8s3re7rawxpmn76jktleww1somig3ozrl0lq4iemia7837unfk2e1zqfg4h6e6ezbpsj1f980fbfwtatdhg3a55pdhppsnfpuun766cghshucdrnm9ealy6zcb911qvhlppsn7dkpj2yl72wbda7ysn83hdizhp1yd47pe7ur1inb14npiofvvf2vd4efq4upwqmxf7j061s9ty8nfmyfk0ec43h5b7vwqi6ynei4ir1qv187pohp4ci03sdk693276xy8bojwpr683dw9d17c1nn4b84cwqme3gm9u5xhylo6d07ltfzp91ipe71alhtrsz6eaornfwru8ltjvxlt6wdwomc9euhqbbauxvpepm7b8n5y0wjqsvt9w78hror542mrubh2p3ry08sjgin3foedrfgmbpj13sdrj24atadoi3iaflp3srz0sr4wuxbzu5st5nvbupx5qsu1fiabj3pe0fygv6jb1rocxr8w3up2j1qxqt3yf7e8olchm1kh5p8ar97zzc65tnzvhi7j3gwfbzz4gzx045gnfvf4it3f0qa5vrb5oxme49lvfonze2juy703lqtsw5rt4cihw7devf4ipillrde8m514fy1z91y9640j7loaph2eknvk5baa6mk8w0sfe3uhjc9oclc85ibl1l7knpsa1jykd5j1guglw48d6rfwalso3i69x8tr1b8hc2wriqkbcar37phct2m8xygiqptqrzt0asiozbgmefk5t8pv0t0nkct3kzp9we6o2tkxqz3rlf7st4uzw3d2cm336t0bqcbwoq5rjp7cgwnwt50ata08nx6yixyfbz0wys5817v7a5mhzf36eu9mx4pw9y5y3wszq4yo77uc7pq661et9nsen7vwynbaaavt20snbe6ldnjd5173rcfivm8ziaclu2ajahycla6uao7mzjurcjdmg23vuvylx38h62r73vw5d0qp1ufvsww72ha56gqf89p0u673da62arpgjb6hffru3zbsr9uc093j9k7a03rgg
s994ifh29baq0aezz4gc9nqqfq6u399is5u6rhu8m6w6z2p3lxkbm42cufqhy18mc87e5e0n7igho27v2m5k6hzv19hs7exjnq6ztwxoar9pwpn1n556l7tly5xl8hewextic44p8680xlcruj9iauq3no14se9vvsh37mnwl5uchsu0ffvbqm1fdwckoc49vr2owkmdaa46hzoy2jhm30h1qh4znx0oxv4ot64a1inveez1rez3ork7wxgwrwetf1pfzb0kbtv1hmjpkpb9o0wpnvzh51bkcff7yidrdzr57bojt4idf11k31he9rtkjfe86ckc4jhe5o2immr9u2gpjtx98mk3s79s7j8flbjiwnzg7jzv9clz04w2fesmbv6x19kvlwrwfymvsfen82aye0kuj313v7k6obn74k88nvh3mwzvttd9b37abw40aso440idpkcv77z8m8tzlsr62omjgluludibyex1nm206i11psbg21wloj0ngc7kizv9tsdv8we1kx5pu3n016pg5ngpa73b3rrungprxtvfvwodrt8g2t7ju3oztbmdhhgcwxmrh5y0qsmlmv55clrynpow51pqlddmfcmp63x81kj0653443dedf5fpiiuwzdhaf3cbqv1fp6s2olf7h86dv9qzyb513whw76bpah18movj1c6f5ptlh1i2dtry8wxqbz63vj1o2kbys2lg71b8ph7i2rox4d34xmdij6xal4vl3ncabtujm6ee0nvj8lxhfx8hdi7cg5viprf446mnyghj1b1abqe9h741cnl3r58oo6rwt5hcpn5s4qq1z3nstfdt49r00v4h1ljypfp5kop0aj7j4c8ugg64qokjrxa8ix7van0b5x5v88748bnf0alm91jorja4yq9ssqcg48cwlzlt27yg8dt7f5bcn22o7vtzltkmpe546e6c6pj99cn0p820mnhix84 00:08:16.214 22:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:08:16.214 22:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:08:16.214 22:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:16.214 22:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:16.473 { 00:08:16.473 "subsystems": [ 00:08:16.473 { 00:08:16.473 "subsystem": "bdev", 00:08:16.473 "config": [ 00:08:16.473 { 00:08:16.473 "params": { 00:08:16.473 "trtype": "pcie", 00:08:16.473 "traddr": "0000:00:10.0", 00:08:16.473 "name": "Nvme0" 00:08:16.473 }, 00:08:16.473 "method": "bdev_nvme_attach_controller" 00:08:16.473 }, 00:08:16.473 { 00:08:16.473 "method": "bdev_wait_for_examine" 00:08:16.473 } 00:08:16.473 ] 00:08:16.473 } 00:08:16.473 ] 00:08:16.473 } 00:08:16.473 
[2024-07-23 22:09:48.432533] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:16.473 [2024-07-23 22:09:48.432651] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76360 ] 00:08:16.473 [2024-07-23 22:09:48.559677] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:16.473 [2024-07-23 22:09:48.577457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.473 [2024-07-23 22:09:48.626149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.731 [2024-07-23 22:09:48.667493] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:16.990  Copying: 4096/4096 [B] (average 4000 kBps) 00:08:16.990 00:08:16.990 22:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:08:16.990 22:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:08:16.990 22:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:16.990 22:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:16.990 { 00:08:16.990 "subsystems": [ 00:08:16.990 { 00:08:16.990 "subsystem": "bdev", 00:08:16.990 "config": [ 00:08:16.990 { 00:08:16.990 "params": { 00:08:16.990 "trtype": "pcie", 00:08:16.990 "traddr": "0000:00:10.0", 00:08:16.990 "name": "Nvme0" 00:08:16.990 }, 00:08:16.990 "method": "bdev_nvme_attach_controller" 00:08:16.990 }, 00:08:16.990 { 00:08:16.990 "method": "bdev_wait_for_examine" 00:08:16.990 } 00:08:16.990 ] 00:08:16.990 } 
00:08:16.990 ] 00:08:16.990 } 00:08:16.990 [2024-07-23 22:09:48.995019] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:16.990 [2024-07-23 22:09:48.995374] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76375 ] 00:08:16.990 [2024-07-23 22:09:49.121523] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:16.990 [2024-07-23 22:09:49.140080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.248 [2024-07-23 22:09:49.188774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.248 [2024-07-23 22:09:49.230174] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:17.508  Copying: 4096/4096 [B] (average 4000 kBps) 00:08:17.508 00:08:17.508 22:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:08:17.508 ************************************ 00:08:17.508 END TEST dd_rw_offset 00:08:17.508 ************************************ 00:08:17.509 22:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 
[... 4096-byte random payload and its backslash-escaped copy omitted; values matched ...] ]] 00:08:17.509 00:08:17.509 real 0m1.177s 00:08:17.509 user 0m0.752s 00:08:17.509 sys 0m0.525s 00:08:17.509 22:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.509 22:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:17.509 22:09:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:08:17.509 22:09:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:08:17.509 22:09:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:17.509 22:09:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:17.509 22:09:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:08:17.509 22:09:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:17.509 22:09:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:08:17.509 22:09:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:17.509 22:09:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:08:17.509 22:09:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:17.509 22:09:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:17.509 { 00:08:17.509 "subsystems": [ 00:08:17.509 { 00:08:17.509 "subsystem": "bdev", 00:08:17.509 "config": [ 00:08:17.509 { 00:08:17.509 "params": { 00:08:17.509 "trtype": "pcie", 00:08:17.509 "traddr": "0000:00:10.0", 00:08:17.509 "name": "Nvme0" 00:08:17.509 }, 00:08:17.509 "method": "bdev_nvme_attach_controller" 
00:08:17.509 }, 00:08:17.509 { 00:08:17.509 "method": "bdev_wait_for_examine" 00:08:17.509 } 00:08:17.509 ] 00:08:17.509 } 00:08:17.509 ] 00:08:17.509 } 00:08:17.509 [2024-07-23 22:09:49.606748] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:17.509 [2024-07-23 22:09:49.607056] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76404 ] 00:08:17.768 [2024-07-23 22:09:49.733370] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:17.768 [2024-07-23 22:09:49.752019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.768 [2024-07-23 22:09:49.800753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.768 [2024-07-23 22:09:49.842204] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:18.026  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:18.026 00:08:18.026 22:09:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:18.026 ************************************ 00:08:18.026 END TEST spdk_dd_basic_rw 00:08:18.026 ************************************ 00:08:18.026 00:08:18.026 real 0m16.028s 00:08:18.026 user 0m10.898s 00:08:18.026 sys 0m6.134s 00:08:18.026 22:09:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.026 22:09:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:18.026 22:09:50 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:18.026 22:09:50 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:18.026 
22:09:50 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.026 22:09:50 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:18.026 ************************************ 00:08:18.026 START TEST spdk_dd_posix 00:08:18.026 ************************************ 00:08:18.026 22:09:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:18.284 * Looking for test storage... 00:08:18.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', 
liburing in use' 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:18.284 * First test run, liburing in use 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:18.284 ************************************ 00:08:18.284 START TEST dd_flag_append 00:08:18.284 ************************************ 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=abrf0zc3sl9jz89ct46qguel4hcuaqqw 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- 
dd/posix.sh@20 -- # gen_bytes 32 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=zxqialfdj12e3v2nwe4lspzbsag9wli0 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s abrf0zc3sl9jz89ct46qguel4hcuaqqw 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s zxqialfdj12e3v2nwe4lspzbsag9wli0 00:08:18.284 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:18.284 [2024-07-23 22:09:50.360853] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:18.284 [2024-07-23 22:09:50.360979] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76463 ] 00:08:18.543 [2024-07-23 22:09:50.486994] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:18.543 [2024-07-23 22:09:50.501600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.543 [2024-07-23 22:09:50.550478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.543 [2024-07-23 22:09:50.591512] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:18.801  Copying: 32/32 [B] (average 31 kBps) 00:08:18.801 00:08:18.801 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ zxqialfdj12e3v2nwe4lspzbsag9wli0abrf0zc3sl9jz89ct46qguel4hcuaqqw == \z\x\q\i\a\l\f\d\j\1\2\e\3\v\2\n\w\e\4\l\s\p\z\b\s\a\g\9\w\l\i\0\a\b\r\f\0\z\c\3\s\l\9\j\z\8\9\c\t\4\6\q\g\u\e\l\4\h\c\u\a\q\q\w ]] 00:08:18.801 00:08:18.801 real 0m0.483s 00:08:18.801 user 0m0.242s 00:08:18.801 sys 0m0.233s 00:08:18.801 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.801 ************************************ 00:08:18.801 END TEST dd_flag_append 00:08:18.801 ************************************ 00:08:18.801 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:18.801 22:09:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:18.801 22:09:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:18.801 22:09:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.801 22:09:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:18.801 ************************************ 00:08:18.801 START TEST dd_flag_directory 00:08:18.801 ************************************ 00:08:18.801 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:08:18.801 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:18.801 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:08:18.801 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:18.801 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.801 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:18.801 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.801 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:18.801 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.801 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:18.801 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.801 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:18.801 22:09:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:18.801 [2024-07-23 22:09:50.901033] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:08:18.801 [2024-07-23 22:09:50.901135] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76497 ] 00:08:19.058 [2024-07-23 22:09:51.027318] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:19.058 [2024-07-23 22:09:51.044916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.058 [2024-07-23 22:09:51.093503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.058 [2024-07-23 22:09:51.134214] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:19.058 [2024-07-23 22:09:51.155159] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:19.058 [2024-07-23 22:09:51.155211] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:19.058 [2024-07-23 22:09:51.155223] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:19.058 [2024-07-23 22:09:51.244486] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:19.316 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:08:19.316 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:19.316 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:08:19.316 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:08:19.316 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:08:19.316 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- 
# (( !es == 0 )) 00:08:19.316 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:19.316 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:08:19.316 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:19.316 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.316 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:19.316 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.316 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:19.316 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.316 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:19.316 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.316 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:19.316 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:19.316 [2024-07-23 22:09:51.382838] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:19.317 [2024-07-23 22:09:51.383515] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76501 ] 00:08:19.317 [2024-07-23 22:09:51.509966] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:19.575 [2024-07-23 22:09:51.529376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.575 [2024-07-23 22:09:51.578405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.575 [2024-07-23 22:09:51.619331] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:19.575 [2024-07-23 22:09:51.640259] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:19.575 [2024-07-23 22:09:51.640307] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:19.575 [2024-07-23 22:09:51.640319] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:19.575 [2024-07-23 22:09:51.729814] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:19.834 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:08:19.834 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:19.834 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:08:19.834 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@661 -- # case "$es" in 00:08:19.834 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:08:19.834 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:19.834 00:08:19.834 real 0m0.968s 00:08:19.834 user 0m0.497s 00:08:19.834 sys 0m0.261s 00:08:19.834 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.834 ************************************ 00:08:19.834 END TEST dd_flag_directory 00:08:19.834 ************************************ 00:08:19.834 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:08:19.834 22:09:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:19.834 22:09:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:19.834 22:09:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.834 22:09:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:19.834 ************************************ 00:08:19.834 START TEST dd_flag_nofollow 00:08:19.834 ************************************ 00:08:19.834 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:08:19.834 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:19.834 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:19.834 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:19.834 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs 
/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:19.834 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:19.834 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:08:19.834 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:19.834 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.834 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:19.834 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.834 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:19.834 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.834 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:19.834 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.834 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:19.835 22:09:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:19.835 [2024-07-23 22:09:51.932966] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:19.835 [2024-07-23 22:09:51.933066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76535 ] 00:08:20.092 [2024-07-23 22:09:52.059525] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:20.092 [2024-07-23 22:09:52.077187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.092 [2024-07-23 22:09:52.125859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.093 [2024-07-23 22:09:52.166744] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:20.093 [2024-07-23 22:09:52.187736] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:20.093 [2024-07-23 22:09:52.187786] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:20.093 [2024-07-23 22:09:52.187799] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:20.093 [2024-07-23 22:09:52.276980] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:20.350 22:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:08:20.350 22:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:20.350 22:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@660 -- # es=88 00:08:20.350 22:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:08:20.350 22:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:08:20.350 22:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:20.350 22:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:20.350 22:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:08:20.350 22:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:20.350 22:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.350 22:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:20.350 22:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.350 22:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:20.350 22:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.350 22:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:20.350 22:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.350 22:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:20.351 22:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:20.351 [2024-07-23 22:09:52.419346] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:20.351 [2024-07-23 22:09:52.419448] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76539 ] 00:08:20.608 [2024-07-23 22:09:52.545597] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:20.608 [2024-07-23 22:09:52.563680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.608 [2024-07-23 22:09:52.612615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.608 [2024-07-23 22:09:52.653794] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:20.608 [2024-07-23 22:09:52.674983] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:20.608 [2024-07-23 22:09:52.675031] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:20.608 [2024-07-23 22:09:52.675045] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:20.608 [2024-07-23 22:09:52.764448] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:20.866 22:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:08:20.866 22:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:20.866 22:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:08:20.866 22:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:08:20.866 22:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:08:20.866 22:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:20.866 22:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:08:20.866 22:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:08:20.866 22:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:20.866 22:09:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:20.866 [2024-07-23 22:09:52.906497] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:20.866 [2024-07-23 22:09:52.906586] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76552 ] 00:08:20.866 [2024-07-23 22:09:53.033000] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:20.866 [2024-07-23 22:09:53.049027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.124 [2024-07-23 22:09:53.097443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.124 [2024-07-23 22:09:53.138386] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:21.380  Copying: 512/512 [B] (average 500 kBps) 00:08:21.380 00:08:21.380 ************************************ 00:08:21.380 END TEST dd_flag_nofollow 00:08:21.380 ************************************ 00:08:21.380 22:09:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ ba6vfvg9hqs1djwrdnv3j48x4kg3l20aqqh0tg3d8zw0qdnautffo1nvbccyxzd9qadvm7nb7tj6zdngektpg20ab3wso5hykt1ohzc9nxo7xqw2g9qofulde39xtohnw14r29pror11a7o7ryh3mc9regpzu95upfhl24xzv0nutcp6ycdoz277nmh2jn5zx83g61kjlekw93o9iwbfj6mjj5aqrlj2c1pabla8ydvp91yuppsf2ljl0lchuz08zugqnx8ujfo6a5ddswyo3csw6pxxo9v6md8ty3tlwylv6h6yg5imxyohl22scsuc4ir5ckfh6gikqq3qj8w5sr7z3iprwm6c7zqy0f3i3xy8bypk9x4hpl3wodbo6823czon378i10phogkdr0xt9vpzsglwe9xcom3njpxmsh033dyjd0jssrz3dn25clnivscmkxxqwn94d0zb0gulqu5evemabgrdwkp2hlcrkcsytfbljmcfev1e71uziw59 == 
\b\a\6\v\f\v\g\9\h\q\s\1\d\j\w\r\d\n\v\3\j\4\8\x\4\k\g\3\l\2\0\a\q\q\h\0\t\g\3\d\8\z\w\0\q\d\n\a\u\t\f\f\o\1\n\v\b\c\c\y\x\z\d\9\q\a\d\v\m\7\n\b\7\t\j\6\z\d\n\g\e\k\t\p\g\2\0\a\b\3\w\s\o\5\h\y\k\t\1\o\h\z\c\9\n\x\o\7\x\q\w\2\g\9\q\o\f\u\l\d\e\3\9\x\t\o\h\n\w\1\4\r\2\9\p\r\o\r\1\1\a\7\o\7\r\y\h\3\m\c\9\r\e\g\p\z\u\9\5\u\p\f\h\l\2\4\x\z\v\0\n\u\t\c\p\6\y\c\d\o\z\2\7\7\n\m\h\2\j\n\5\z\x\8\3\g\6\1\k\j\l\e\k\w\9\3\o\9\i\w\b\f\j\6\m\j\j\5\a\q\r\l\j\2\c\1\p\a\b\l\a\8\y\d\v\p\9\1\y\u\p\p\s\f\2\l\j\l\0\l\c\h\u\z\0\8\z\u\g\q\n\x\8\u\j\f\o\6\a\5\d\d\s\w\y\o\3\c\s\w\6\p\x\x\o\9\v\6\m\d\8\t\y\3\t\l\w\y\l\v\6\h\6\y\g\5\i\m\x\y\o\h\l\2\2\s\c\s\u\c\4\i\r\5\c\k\f\h\6\g\i\k\q\q\3\q\j\8\w\5\s\r\7\z\3\i\p\r\w\m\6\c\7\z\q\y\0\f\3\i\3\x\y\8\b\y\p\k\9\x\4\h\p\l\3\w\o\d\b\o\6\8\2\3\c\z\o\n\3\7\8\i\1\0\p\h\o\g\k\d\r\0\x\t\9\v\p\z\s\g\l\w\e\9\x\c\o\m\3\n\j\p\x\m\s\h\0\3\3\d\y\j\d\0\j\s\s\r\z\3\d\n\2\5\c\l\n\i\v\s\c\m\k\x\x\q\w\n\9\4\d\0\z\b\0\g\u\l\q\u\5\e\v\e\m\a\b\g\r\d\w\k\p\2\h\l\c\r\k\c\s\y\t\f\b\l\j\m\c\f\e\v\1\e\7\1\u\z\i\w\5\9 ]] 00:08:21.380 00:08:21.380 real 0m1.461s 00:08:21.380 user 0m0.744s 00:08:21.380 sys 0m0.506s 00:08:21.380 22:09:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.380 22:09:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:21.380 22:09:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:21.380 22:09:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:21.380 22:09:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.380 22:09:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:21.380 ************************************ 00:08:21.380 START TEST dd_flag_noatime 00:08:21.380 ************************************ 00:08:21.380 22:09:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:08:21.380 22:09:53 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:08:21.380 22:09:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:08:21.380 22:09:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:08:21.380 22:09:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:08:21.380 22:09:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:21.380 22:09:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:21.380 22:09:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721772593 00:08:21.380 22:09:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:21.380 22:09:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721772593 00:08:21.380 22:09:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:08:22.364 22:09:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:22.364 [2024-07-23 22:09:54.477931] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:22.364 [2024-07-23 22:09:54.478026] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76589 ] 00:08:22.622 [2024-07-23 22:09:54.604707] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:22.622 [2024-07-23 22:09:54.620936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.622 [2024-07-23 22:09:54.669765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.622 [2024-07-23 22:09:54.711042] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:22.880  Copying: 512/512 [B] (average 500 kBps) 00:08:22.880 00:08:22.880 22:09:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:22.880 22:09:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721772593 )) 00:08:22.880 22:09:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:22.880 22:09:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721772593 )) 00:08:22.880 22:09:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:22.880 [2024-07-23 22:09:54.966659] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:22.880 [2024-07-23 22:09:54.966760] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76608 ] 00:08:23.138 [2024-07-23 22:09:55.093001] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:23.138 [2024-07-23 22:09:55.108943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.138 [2024-07-23 22:09:55.157882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.138 [2024-07-23 22:09:55.198698] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:23.395  Copying: 512/512 [B] (average 500 kBps) 00:08:23.395 00:08:23.395 22:09:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:23.395 ************************************ 00:08:23.395 END TEST dd_flag_noatime 00:08:23.395 ************************************ 00:08:23.395 22:09:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721772595 )) 00:08:23.395 00:08:23.395 real 0m2.005s 00:08:23.395 user 0m0.510s 00:08:23.395 sys 0m0.486s 00:08:23.395 22:09:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.395 22:09:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:23.396 22:09:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:23.396 22:09:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:23.396 22:09:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.396 22:09:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:23.396 ************************************ 00:08:23.396 START TEST dd_flags_misc 00:08:23.396 ************************************ 00:08:23.396 22:09:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:08:23.396 22:09:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:23.396 22:09:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:23.396 22:09:55 
spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:23.396 22:09:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:23.396 22:09:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:23.396 22:09:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:23.396 22:09:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:23.396 22:09:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:23.396 22:09:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:23.396 [2024-07-23 22:09:55.521383] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:23.396 [2024-07-23 22:09:55.521487] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76631 ] 00:08:23.655 [2024-07-23 22:09:55.647739] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:23.655 [2024-07-23 22:09:55.664917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.655 [2024-07-23 22:09:55.713708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.655 [2024-07-23 22:09:55.754544] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:23.913  Copying: 512/512 [B] (average 500 kBps) 00:08:23.913 00:08:23.913 22:09:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ il8cyvfryigx7tmiwbzuwvsvqo5rvzcyjovg3krkzd5tltnukhwl0u0zzwyo0gg7qbx80np1g7s8adfp0c4xu6x3oj5024slxdi44qajotp2sym13runchhbrsj7bqtaxw2y87g8saoi6q13rqol43tc5kvialx0xuig93s33rk7nikf4d8jto7jd4410louaj5don56cg154qoc9sd2wq5ocx6bn47veqankxqrdb865uqasmk75vmz6plem2qc1eu8k9h8nd3l3qug0w2q3mv5matzots0sw79judvxtlfsaiap8qjo3ag7ndc6iklykrrbk2e58v5rg26zn908zllyxi7r6xzrmxrf8dth808yxs9fds22ppf6728fsll5wle4oba3tim15yllzly4vlxpcafj4geafb1qo62nky0kayvzdty1ldg4ke1ihkk7o4yr6w3e7wbsqgjvhca0pl119vg350li0z6r85fl1y57sevbusmzrx47ehv8goh == 
\i\l\8\c\y\v\f\r\y\i\g\x\7\t\m\i\w\b\z\u\w\v\s\v\q\o\5\r\v\z\c\y\j\o\v\g\3\k\r\k\z\d\5\t\l\t\n\u\k\h\w\l\0\u\0\z\z\w\y\o\0\g\g\7\q\b\x\8\0\n\p\1\g\7\s\8\a\d\f\p\0\c\4\x\u\6\x\3\o\j\5\0\2\4\s\l\x\d\i\4\4\q\a\j\o\t\p\2\s\y\m\1\3\r\u\n\c\h\h\b\r\s\j\7\b\q\t\a\x\w\2\y\8\7\g\8\s\a\o\i\6\q\1\3\r\q\o\l\4\3\t\c\5\k\v\i\a\l\x\0\x\u\i\g\9\3\s\3\3\r\k\7\n\i\k\f\4\d\8\j\t\o\7\j\d\4\4\1\0\l\o\u\a\j\5\d\o\n\5\6\c\g\1\5\4\q\o\c\9\s\d\2\w\q\5\o\c\x\6\b\n\4\7\v\e\q\a\n\k\x\q\r\d\b\8\6\5\u\q\a\s\m\k\7\5\v\m\z\6\p\l\e\m\2\q\c\1\e\u\8\k\9\h\8\n\d\3\l\3\q\u\g\0\w\2\q\3\m\v\5\m\a\t\z\o\t\s\0\s\w\7\9\j\u\d\v\x\t\l\f\s\a\i\a\p\8\q\j\o\3\a\g\7\n\d\c\6\i\k\l\y\k\r\r\b\k\2\e\5\8\v\5\r\g\2\6\z\n\9\0\8\z\l\l\y\x\i\7\r\6\x\z\r\m\x\r\f\8\d\t\h\8\0\8\y\x\s\9\f\d\s\2\2\p\p\f\6\7\2\8\f\s\l\l\5\w\l\e\4\o\b\a\3\t\i\m\1\5\y\l\l\z\l\y\4\v\l\x\p\c\a\f\j\4\g\e\a\f\b\1\q\o\6\2\n\k\y\0\k\a\y\v\z\d\t\y\1\l\d\g\4\k\e\1\i\h\k\k\7\o\4\y\r\6\w\3\e\7\w\b\s\q\g\j\v\h\c\a\0\p\l\1\1\9\v\g\3\5\0\l\i\0\z\6\r\8\5\f\l\1\y\5\7\s\e\v\b\u\s\m\z\r\x\4\7\e\h\v\8\g\o\h ]] 00:08:23.913 22:09:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:23.913 22:09:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:23.913 [2024-07-23 22:09:55.980652] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:23.913 [2024-07-23 22:09:55.980730] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76646 ] 00:08:23.913 [2024-07-23 22:09:56.097708] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:08:24.171 [2024-07-23 22:09:56.111030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.171 [2024-07-23 22:09:56.159735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.171 [2024-07-23 22:09:56.200642] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:24.429  Copying: 512/512 [B] (average 500 kBps) 00:08:24.429 00:08:24.429 22:09:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ il8cyvfryigx7tmiwbzuwvsvqo5rvzcyjovg3krkzd5tltnukhwl0u0zzwyo0gg7qbx80np1g7s8adfp0c4xu6x3oj5024slxdi44qajotp2sym13runchhbrsj7bqtaxw2y87g8saoi6q13rqol43tc5kvialx0xuig93s33rk7nikf4d8jto7jd4410louaj5don56cg154qoc9sd2wq5ocx6bn47veqankxqrdb865uqasmk75vmz6plem2qc1eu8k9h8nd3l3qug0w2q3mv5matzots0sw79judvxtlfsaiap8qjo3ag7ndc6iklykrrbk2e58v5rg26zn908zllyxi7r6xzrmxrf8dth808yxs9fds22ppf6728fsll5wle4oba3tim15yllzly4vlxpcafj4geafb1qo62nky0kayvzdty1ldg4ke1ihkk7o4yr6w3e7wbsqgjvhca0pl119vg350li0z6r85fl1y57sevbusmzrx47ehv8goh == 
\i\l\8\c\y\v\f\r\y\i\g\x\7\t\m\i\w\b\z\u\w\v\s\v\q\o\5\r\v\z\c\y\j\o\v\g\3\k\r\k\z\d\5\t\l\t\n\u\k\h\w\l\0\u\0\z\z\w\y\o\0\g\g\7\q\b\x\8\0\n\p\1\g\7\s\8\a\d\f\p\0\c\4\x\u\6\x\3\o\j\5\0\2\4\s\l\x\d\i\4\4\q\a\j\o\t\p\2\s\y\m\1\3\r\u\n\c\h\h\b\r\s\j\7\b\q\t\a\x\w\2\y\8\7\g\8\s\a\o\i\6\q\1\3\r\q\o\l\4\3\t\c\5\k\v\i\a\l\x\0\x\u\i\g\9\3\s\3\3\r\k\7\n\i\k\f\4\d\8\j\t\o\7\j\d\4\4\1\0\l\o\u\a\j\5\d\o\n\5\6\c\g\1\5\4\q\o\c\9\s\d\2\w\q\5\o\c\x\6\b\n\4\7\v\e\q\a\n\k\x\q\r\d\b\8\6\5\u\q\a\s\m\k\7\5\v\m\z\6\p\l\e\m\2\q\c\1\e\u\8\k\9\h\8\n\d\3\l\3\q\u\g\0\w\2\q\3\m\v\5\m\a\t\z\o\t\s\0\s\w\7\9\j\u\d\v\x\t\l\f\s\a\i\a\p\8\q\j\o\3\a\g\7\n\d\c\6\i\k\l\y\k\r\r\b\k\2\e\5\8\v\5\r\g\2\6\z\n\9\0\8\z\l\l\y\x\i\7\r\6\x\z\r\m\x\r\f\8\d\t\h\8\0\8\y\x\s\9\f\d\s\2\2\p\p\f\6\7\2\8\f\s\l\l\5\w\l\e\4\o\b\a\3\t\i\m\1\5\y\l\l\z\l\y\4\v\l\x\p\c\a\f\j\4\g\e\a\f\b\1\q\o\6\2\n\k\y\0\k\a\y\v\z\d\t\y\1\l\d\g\4\k\e\1\i\h\k\k\7\o\4\y\r\6\w\3\e\7\w\b\s\q\g\j\v\h\c\a\0\p\l\1\1\9\v\g\3\5\0\l\i\0\z\6\r\8\5\f\l\1\y\5\7\s\e\v\b\u\s\m\z\r\x\4\7\e\h\v\8\g\o\h ]] 00:08:24.429 22:09:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:24.429 22:09:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:24.429 [2024-07-23 22:09:56.442383] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:24.429 [2024-07-23 22:09:56.442488] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76650 ] 00:08:24.429 [2024-07-23 22:09:56.568962] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:08:24.429 [2024-07-23 22:09:56.585001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.688 [2024-07-23 22:09:56.633839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.688 [2024-07-23 22:09:56.674826] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:24.688  Copying: 512/512 [B] (average 83 kBps) 00:08:24.688 00:08:24.688 22:09:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ il8cyvfryigx7tmiwbzuwvsvqo5rvzcyjovg3krkzd5tltnukhwl0u0zzwyo0gg7qbx80np1g7s8adfp0c4xu6x3oj5024slxdi44qajotp2sym13runchhbrsj7bqtaxw2y87g8saoi6q13rqol43tc5kvialx0xuig93s33rk7nikf4d8jto7jd4410louaj5don56cg154qoc9sd2wq5ocx6bn47veqankxqrdb865uqasmk75vmz6plem2qc1eu8k9h8nd3l3qug0w2q3mv5matzots0sw79judvxtlfsaiap8qjo3ag7ndc6iklykrrbk2e58v5rg26zn908zllyxi7r6xzrmxrf8dth808yxs9fds22ppf6728fsll5wle4oba3tim15yllzly4vlxpcafj4geafb1qo62nky0kayvzdty1ldg4ke1ihkk7o4yr6w3e7wbsqgjvhca0pl119vg350li0z6r85fl1y57sevbusmzrx47ehv8goh == 
\i\l\8\c\y\v\f\r\y\i\g\x\7\t\m\i\w\b\z\u\w\v\s\v\q\o\5\r\v\z\c\y\j\o\v\g\3\k\r\k\z\d\5\t\l\t\n\u\k\h\w\l\0\u\0\z\z\w\y\o\0\g\g\7\q\b\x\8\0\n\p\1\g\7\s\8\a\d\f\p\0\c\4\x\u\6\x\3\o\j\5\0\2\4\s\l\x\d\i\4\4\q\a\j\o\t\p\2\s\y\m\1\3\r\u\n\c\h\h\b\r\s\j\7\b\q\t\a\x\w\2\y\8\7\g\8\s\a\o\i\6\q\1\3\r\q\o\l\4\3\t\c\5\k\v\i\a\l\x\0\x\u\i\g\9\3\s\3\3\r\k\7\n\i\k\f\4\d\8\j\t\o\7\j\d\4\4\1\0\l\o\u\a\j\5\d\o\n\5\6\c\g\1\5\4\q\o\c\9\s\d\2\w\q\5\o\c\x\6\b\n\4\7\v\e\q\a\n\k\x\q\r\d\b\8\6\5\u\q\a\s\m\k\7\5\v\m\z\6\p\l\e\m\2\q\c\1\e\u\8\k\9\h\8\n\d\3\l\3\q\u\g\0\w\2\q\3\m\v\5\m\a\t\z\o\t\s\0\s\w\7\9\j\u\d\v\x\t\l\f\s\a\i\a\p\8\q\j\o\3\a\g\7\n\d\c\6\i\k\l\y\k\r\r\b\k\2\e\5\8\v\5\r\g\2\6\z\n\9\0\8\z\l\l\y\x\i\7\r\6\x\z\r\m\x\r\f\8\d\t\h\8\0\8\y\x\s\9\f\d\s\2\2\p\p\f\6\7\2\8\f\s\l\l\5\w\l\e\4\o\b\a\3\t\i\m\1\5\y\l\l\z\l\y\4\v\l\x\p\c\a\f\j\4\g\e\a\f\b\1\q\o\6\2\n\k\y\0\k\a\y\v\z\d\t\y\1\l\d\g\4\k\e\1\i\h\k\k\7\o\4\y\r\6\w\3\e\7\w\b\s\q\g\j\v\h\c\a\0\p\l\1\1\9\v\g\3\5\0\l\i\0\z\6\r\8\5\f\l\1\y\5\7\s\e\v\b\u\s\m\z\r\x\4\7\e\h\v\8\g\o\h ]] 00:08:24.688 22:09:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:24.688 22:09:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:24.945 [2024-07-23 22:09:56.909525] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:24.945 [2024-07-23 22:09:56.909600] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76659 ] 00:08:24.945 [2024-07-23 22:09:57.026719] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:08:24.945 [2024-07-23 22:09:57.043812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.945 [2024-07-23 22:09:57.092606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.945 [2024-07-23 22:09:57.133609] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:25.203  Copying: 512/512 [B] (average 250 kBps) 00:08:25.203 00:08:25.203 22:09:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ il8cyvfryigx7tmiwbzuwvsvqo5rvzcyjovg3krkzd5tltnukhwl0u0zzwyo0gg7qbx80np1g7s8adfp0c4xu6x3oj5024slxdi44qajotp2sym13runchhbrsj7bqtaxw2y87g8saoi6q13rqol43tc5kvialx0xuig93s33rk7nikf4d8jto7jd4410louaj5don56cg154qoc9sd2wq5ocx6bn47veqankxqrdb865uqasmk75vmz6plem2qc1eu8k9h8nd3l3qug0w2q3mv5matzots0sw79judvxtlfsaiap8qjo3ag7ndc6iklykrrbk2e58v5rg26zn908zllyxi7r6xzrmxrf8dth808yxs9fds22ppf6728fsll5wle4oba3tim15yllzly4vlxpcafj4geafb1qo62nky0kayvzdty1ldg4ke1ihkk7o4yr6w3e7wbsqgjvhca0pl119vg350li0z6r85fl1y57sevbusmzrx47ehv8goh == 
\i\l\8\c\y\v\f\r\y\i\g\x\7\t\m\i\w\b\z\u\w\v\s\v\q\o\5\r\v\z\c\y\j\o\v\g\3\k\r\k\z\d\5\t\l\t\n\u\k\h\w\l\0\u\0\z\z\w\y\o\0\g\g\7\q\b\x\8\0\n\p\1\g\7\s\8\a\d\f\p\0\c\4\x\u\6\x\3\o\j\5\0\2\4\s\l\x\d\i\4\4\q\a\j\o\t\p\2\s\y\m\1\3\r\u\n\c\h\h\b\r\s\j\7\b\q\t\a\x\w\2\y\8\7\g\8\s\a\o\i\6\q\1\3\r\q\o\l\4\3\t\c\5\k\v\i\a\l\x\0\x\u\i\g\9\3\s\3\3\r\k\7\n\i\k\f\4\d\8\j\t\o\7\j\d\4\4\1\0\l\o\u\a\j\5\d\o\n\5\6\c\g\1\5\4\q\o\c\9\s\d\2\w\q\5\o\c\x\6\b\n\4\7\v\e\q\a\n\k\x\q\r\d\b\8\6\5\u\q\a\s\m\k\7\5\v\m\z\6\p\l\e\m\2\q\c\1\e\u\8\k\9\h\8\n\d\3\l\3\q\u\g\0\w\2\q\3\m\v\5\m\a\t\z\o\t\s\0\s\w\7\9\j\u\d\v\x\t\l\f\s\a\i\a\p\8\q\j\o\3\a\g\7\n\d\c\6\i\k\l\y\k\r\r\b\k\2\e\5\8\v\5\r\g\2\6\z\n\9\0\8\z\l\l\y\x\i\7\r\6\x\z\r\m\x\r\f\8\d\t\h\8\0\8\y\x\s\9\f\d\s\2\2\p\p\f\6\7\2\8\f\s\l\l\5\w\l\e\4\o\b\a\3\t\i\m\1\5\y\l\l\z\l\y\4\v\l\x\p\c\a\f\j\4\g\e\a\f\b\1\q\o\6\2\n\k\y\0\k\a\y\v\z\d\t\y\1\l\d\g\4\k\e\1\i\h\k\k\7\o\4\y\r\6\w\3\e\7\w\b\s\q\g\j\v\h\c\a\0\p\l\1\1\9\v\g\3\5\0\l\i\0\z\6\r\8\5\f\l\1\y\5\7\s\e\v\b\u\s\m\z\r\x\4\7\e\h\v\8\g\o\h ]] 00:08:25.203 22:09:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:25.203 22:09:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:25.203 22:09:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:25.203 22:09:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:25.203 22:09:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:25.203 22:09:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:25.203 [2024-07-23 22:09:57.395748] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:08:25.203 [2024-07-23 22:09:57.396127] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76669 ] 00:08:25.461 [2024-07-23 22:09:57.523397] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:25.461 [2024-07-23 22:09:57.537768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.461 [2024-07-23 22:09:57.586287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.461 [2024-07-23 22:09:57.627114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:25.720  Copying: 512/512 [B] (average 500 kBps) 00:08:25.720 00:08:25.720 22:09:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ y92gk0rmmct1ejlco2imdo7evc9a2l7h9w7ceer5z7q5ukeyekv8mw2w1n818ncla0tbqgjwktrfmxbrs45h2ihaznx5sq05hwxg8gloo9fih6ybh1yp09qxnvjbp0id9x10nx6tejhops4l8ks9sxjpylkb3zpy4xfk2vrwv2d1dgtn7evpq37qokdcmeoercdf9xikuz60fmtqpefc8ghf2ic4xm2a44knx24mrrjbtt2oc7ywv29b327bgt0kj0eh5sxwk4y0xh2ueic4dbpxtuah5u0h3m6zd5m6t7zvp0ve9mk9yefweq0ii8nw0su6tp7iccejhicm64o53ru1boxpsocb2v9ulacciy72dhtr8xs9tnucon4lrzd17msm4fxo5dl2dodmmbi95qrvt91eev1zh88t4nvgjy0wu0ia8zqm3zs0kzz83h3tq5fjkce2wszroyjxd75s5qu86cvyinej9hksbj5asw2c4ehyn6h0ra2yjk8yuepg == 
\y\9\2\g\k\0\r\m\m\c\t\1\e\j\l\c\o\2\i\m\d\o\7\e\v\c\9\a\2\l\7\h\9\w\7\c\e\e\r\5\z\7\q\5\u\k\e\y\e\k\v\8\m\w\2\w\1\n\8\1\8\n\c\l\a\0\t\b\q\g\j\w\k\t\r\f\m\x\b\r\s\4\5\h\2\i\h\a\z\n\x\5\s\q\0\5\h\w\x\g\8\g\l\o\o\9\f\i\h\6\y\b\h\1\y\p\0\9\q\x\n\v\j\b\p\0\i\d\9\x\1\0\n\x\6\t\e\j\h\o\p\s\4\l\8\k\s\9\s\x\j\p\y\l\k\b\3\z\p\y\4\x\f\k\2\v\r\w\v\2\d\1\d\g\t\n\7\e\v\p\q\3\7\q\o\k\d\c\m\e\o\e\r\c\d\f\9\x\i\k\u\z\6\0\f\m\t\q\p\e\f\c\8\g\h\f\2\i\c\4\x\m\2\a\4\4\k\n\x\2\4\m\r\r\j\b\t\t\2\o\c\7\y\w\v\2\9\b\3\2\7\b\g\t\0\k\j\0\e\h\5\s\x\w\k\4\y\0\x\h\2\u\e\i\c\4\d\b\p\x\t\u\a\h\5\u\0\h\3\m\6\z\d\5\m\6\t\7\z\v\p\0\v\e\9\m\k\9\y\e\f\w\e\q\0\i\i\8\n\w\0\s\u\6\t\p\7\i\c\c\e\j\h\i\c\m\6\4\o\5\3\r\u\1\b\o\x\p\s\o\c\b\2\v\9\u\l\a\c\c\i\y\7\2\d\h\t\r\8\x\s\9\t\n\u\c\o\n\4\l\r\z\d\1\7\m\s\m\4\f\x\o\5\d\l\2\d\o\d\m\m\b\i\9\5\q\r\v\t\9\1\e\e\v\1\z\h\8\8\t\4\n\v\g\j\y\0\w\u\0\i\a\8\z\q\m\3\z\s\0\k\z\z\8\3\h\3\t\q\5\f\j\k\c\e\2\w\s\z\r\o\y\j\x\d\7\5\s\5\q\u\8\6\c\v\y\i\n\e\j\9\h\k\s\b\j\5\a\s\w\2\c\4\e\h\y\n\6\h\0\r\a\2\y\j\k\8\y\u\e\p\g ]] 00:08:25.720 22:09:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:25.720 22:09:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:25.720 [2024-07-23 22:09:57.874539] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:25.720 [2024-07-23 22:09:57.874632] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76673 ] 00:08:25.978 [2024-07-23 22:09:58.006818] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:08:25.978 [2024-07-23 22:09:58.024985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.978 [2024-07-23 22:09:58.073572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.978 [2024-07-23 22:09:58.114275] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:26.236  Copying: 512/512 [B] (average 500 kBps) 00:08:26.236 00:08:26.236 22:09:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ y92gk0rmmct1ejlco2imdo7evc9a2l7h9w7ceer5z7q5ukeyekv8mw2w1n818ncla0tbqgjwktrfmxbrs45h2ihaznx5sq05hwxg8gloo9fih6ybh1yp09qxnvjbp0id9x10nx6tejhops4l8ks9sxjpylkb3zpy4xfk2vrwv2d1dgtn7evpq37qokdcmeoercdf9xikuz60fmtqpefc8ghf2ic4xm2a44knx24mrrjbtt2oc7ywv29b327bgt0kj0eh5sxwk4y0xh2ueic4dbpxtuah5u0h3m6zd5m6t7zvp0ve9mk9yefweq0ii8nw0su6tp7iccejhicm64o53ru1boxpsocb2v9ulacciy72dhtr8xs9tnucon4lrzd17msm4fxo5dl2dodmmbi95qrvt91eev1zh88t4nvgjy0wu0ia8zqm3zs0kzz83h3tq5fjkce2wszroyjxd75s5qu86cvyinej9hksbj5asw2c4ehyn6h0ra2yjk8yuepg == 
\y\9\2\g\k\0\r\m\m\c\t\1\e\j\l\c\o\2\i\m\d\o\7\e\v\c\9\a\2\l\7\h\9\w\7\c\e\e\r\5\z\7\q\5\u\k\e\y\e\k\v\8\m\w\2\w\1\n\8\1\8\n\c\l\a\0\t\b\q\g\j\w\k\t\r\f\m\x\b\r\s\4\5\h\2\i\h\a\z\n\x\5\s\q\0\5\h\w\x\g\8\g\l\o\o\9\f\i\h\6\y\b\h\1\y\p\0\9\q\x\n\v\j\b\p\0\i\d\9\x\1\0\n\x\6\t\e\j\h\o\p\s\4\l\8\k\s\9\s\x\j\p\y\l\k\b\3\z\p\y\4\x\f\k\2\v\r\w\v\2\d\1\d\g\t\n\7\e\v\p\q\3\7\q\o\k\d\c\m\e\o\e\r\c\d\f\9\x\i\k\u\z\6\0\f\m\t\q\p\e\f\c\8\g\h\f\2\i\c\4\x\m\2\a\4\4\k\n\x\2\4\m\r\r\j\b\t\t\2\o\c\7\y\w\v\2\9\b\3\2\7\b\g\t\0\k\j\0\e\h\5\s\x\w\k\4\y\0\x\h\2\u\e\i\c\4\d\b\p\x\t\u\a\h\5\u\0\h\3\m\6\z\d\5\m\6\t\7\z\v\p\0\v\e\9\m\k\9\y\e\f\w\e\q\0\i\i\8\n\w\0\s\u\6\t\p\7\i\c\c\e\j\h\i\c\m\6\4\o\5\3\r\u\1\b\o\x\p\s\o\c\b\2\v\9\u\l\a\c\c\i\y\7\2\d\h\t\r\8\x\s\9\t\n\u\c\o\n\4\l\r\z\d\1\7\m\s\m\4\f\x\o\5\d\l\2\d\o\d\m\m\b\i\9\5\q\r\v\t\9\1\e\e\v\1\z\h\8\8\t\4\n\v\g\j\y\0\w\u\0\i\a\8\z\q\m\3\z\s\0\k\z\z\8\3\h\3\t\q\5\f\j\k\c\e\2\w\s\z\r\o\y\j\x\d\7\5\s\5\q\u\8\6\c\v\y\i\n\e\j\9\h\k\s\b\j\5\a\s\w\2\c\4\e\h\y\n\6\h\0\r\a\2\y\j\k\8\y\u\e\p\g ]] 00:08:26.236 22:09:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:26.236 22:09:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:26.236 [2024-07-23 22:09:58.362623] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:26.236 [2024-07-23 22:09:58.362715] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76688 ] 00:08:26.494 [2024-07-23 22:09:58.489786] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:08:26.494 [2024-07-23 22:09:58.506052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.494 [2024-07-23 22:09:58.554651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.494 [2024-07-23 22:09:58.595327] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:26.751  Copying: 512/512 [B] (average 250 kBps) 00:08:26.751 00:08:26.752 22:09:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ y92gk0rmmct1ejlco2imdo7evc9a2l7h9w7ceer5z7q5ukeyekv8mw2w1n818ncla0tbqgjwktrfmxbrs45h2ihaznx5sq05hwxg8gloo9fih6ybh1yp09qxnvjbp0id9x10nx6tejhops4l8ks9sxjpylkb3zpy4xfk2vrwv2d1dgtn7evpq37qokdcmeoercdf9xikuz60fmtqpefc8ghf2ic4xm2a44knx24mrrjbtt2oc7ywv29b327bgt0kj0eh5sxwk4y0xh2ueic4dbpxtuah5u0h3m6zd5m6t7zvp0ve9mk9yefweq0ii8nw0su6tp7iccejhicm64o53ru1boxpsocb2v9ulacciy72dhtr8xs9tnucon4lrzd17msm4fxo5dl2dodmmbi95qrvt91eev1zh88t4nvgjy0wu0ia8zqm3zs0kzz83h3tq5fjkce2wszroyjxd75s5qu86cvyinej9hksbj5asw2c4ehyn6h0ra2yjk8yuepg == 
\y\9\2\g\k\0\r\m\m\c\t\1\e\j\l\c\o\2\i\m\d\o\7\e\v\c\9\a\2\l\7\h\9\w\7\c\e\e\r\5\z\7\q\5\u\k\e\y\e\k\v\8\m\w\2\w\1\n\8\1\8\n\c\l\a\0\t\b\q\g\j\w\k\t\r\f\m\x\b\r\s\4\5\h\2\i\h\a\z\n\x\5\s\q\0\5\h\w\x\g\8\g\l\o\o\9\f\i\h\6\y\b\h\1\y\p\0\9\q\x\n\v\j\b\p\0\i\d\9\x\1\0\n\x\6\t\e\j\h\o\p\s\4\l\8\k\s\9\s\x\j\p\y\l\k\b\3\z\p\y\4\x\f\k\2\v\r\w\v\2\d\1\d\g\t\n\7\e\v\p\q\3\7\q\o\k\d\c\m\e\o\e\r\c\d\f\9\x\i\k\u\z\6\0\f\m\t\q\p\e\f\c\8\g\h\f\2\i\c\4\x\m\2\a\4\4\k\n\x\2\4\m\r\r\j\b\t\t\2\o\c\7\y\w\v\2\9\b\3\2\7\b\g\t\0\k\j\0\e\h\5\s\x\w\k\4\y\0\x\h\2\u\e\i\c\4\d\b\p\x\t\u\a\h\5\u\0\h\3\m\6\z\d\5\m\6\t\7\z\v\p\0\v\e\9\m\k\9\y\e\f\w\e\q\0\i\i\8\n\w\0\s\u\6\t\p\7\i\c\c\e\j\h\i\c\m\6\4\o\5\3\r\u\1\b\o\x\p\s\o\c\b\2\v\9\u\l\a\c\c\i\y\7\2\d\h\t\r\8\x\s\9\t\n\u\c\o\n\4\l\r\z\d\1\7\m\s\m\4\f\x\o\5\d\l\2\d\o\d\m\m\b\i\9\5\q\r\v\t\9\1\e\e\v\1\z\h\8\8\t\4\n\v\g\j\y\0\w\u\0\i\a\8\z\q\m\3\z\s\0\k\z\z\8\3\h\3\t\q\5\f\j\k\c\e\2\w\s\z\r\o\y\j\x\d\7\5\s\5\q\u\8\6\c\v\y\i\n\e\j\9\h\k\s\b\j\5\a\s\w\2\c\4\e\h\y\n\6\h\0\r\a\2\y\j\k\8\y\u\e\p\g ]] 00:08:26.752 22:09:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:26.752 22:09:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:26.752 [2024-07-23 22:09:58.843569] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:26.752 [2024-07-23 22:09:58.843672] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76692 ] 00:08:27.009 [2024-07-23 22:09:58.969773] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:08:27.009 [2024-07-23 22:09:58.988580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.009 [2024-07-23 22:09:59.037578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.009 [2024-07-23 22:09:59.078435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:27.267  Copying: 512/512 [B] (average 125 kBps) 00:08:27.267 00:08:27.267 22:09:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ y92gk0rmmct1ejlco2imdo7evc9a2l7h9w7ceer5z7q5ukeyekv8mw2w1n818ncla0tbqgjwktrfmxbrs45h2ihaznx5sq05hwxg8gloo9fih6ybh1yp09qxnvjbp0id9x10nx6tejhops4l8ks9sxjpylkb3zpy4xfk2vrwv2d1dgtn7evpq37qokdcmeoercdf9xikuz60fmtqpefc8ghf2ic4xm2a44knx24mrrjbtt2oc7ywv29b327bgt0kj0eh5sxwk4y0xh2ueic4dbpxtuah5u0h3m6zd5m6t7zvp0ve9mk9yefweq0ii8nw0su6tp7iccejhicm64o53ru1boxpsocb2v9ulacciy72dhtr8xs9tnucon4lrzd17msm4fxo5dl2dodmmbi95qrvt91eev1zh88t4nvgjy0wu0ia8zqm3zs0kzz83h3tq5fjkce2wszroyjxd75s5qu86cvyinej9hksbj5asw2c4ehyn6h0ra2yjk8yuepg == 
\y\9\2\g\k\0\r\m\m\c\t\1\e\j\l\c\o\2\i\m\d\o\7\e\v\c\9\a\2\l\7\h\9\w\7\c\e\e\r\5\z\7\q\5\u\k\e\y\e\k\v\8\m\w\2\w\1\n\8\1\8\n\c\l\a\0\t\b\q\g\j\w\k\t\r\f\m\x\b\r\s\4\5\h\2\i\h\a\z\n\x\5\s\q\0\5\h\w\x\g\8\g\l\o\o\9\f\i\h\6\y\b\h\1\y\p\0\9\q\x\n\v\j\b\p\0\i\d\9\x\1\0\n\x\6\t\e\j\h\o\p\s\4\l\8\k\s\9\s\x\j\p\y\l\k\b\3\z\p\y\4\x\f\k\2\v\r\w\v\2\d\1\d\g\t\n\7\e\v\p\q\3\7\q\o\k\d\c\m\e\o\e\r\c\d\f\9\x\i\k\u\z\6\0\f\m\t\q\p\e\f\c\8\g\h\f\2\i\c\4\x\m\2\a\4\4\k\n\x\2\4\m\r\r\j\b\t\t\2\o\c\7\y\w\v\2\9\b\3\2\7\b\g\t\0\k\j\0\e\h\5\s\x\w\k\4\y\0\x\h\2\u\e\i\c\4\d\b\p\x\t\u\a\h\5\u\0\h\3\m\6\z\d\5\m\6\t\7\z\v\p\0\v\e\9\m\k\9\y\e\f\w\e\q\0\i\i\8\n\w\0\s\u\6\t\p\7\i\c\c\e\j\h\i\c\m\6\4\o\5\3\r\u\1\b\o\x\p\s\o\c\b\2\v\9\u\l\a\c\c\i\y\7\2\d\h\t\r\8\x\s\9\t\n\u\c\o\n\4\l\r\z\d\1\7\m\s\m\4\f\x\o\5\d\l\2\d\o\d\m\m\b\i\9\5\q\r\v\t\9\1\e\e\v\1\z\h\8\8\t\4\n\v\g\j\y\0\w\u\0\i\a\8\z\q\m\3\z\s\0\k\z\z\8\3\h\3\t\q\5\f\j\k\c\e\2\w\s\z\r\o\y\j\x\d\7\5\s\5\q\u\8\6\c\v\y\i\n\e\j\9\h\k\s\b\j\5\a\s\w\2\c\4\e\h\y\n\6\h\0\r\a\2\y\j\k\8\y\u\e\p\g ]] 00:08:27.267 00:08:27.267 real 0m3.812s 00:08:27.267 user 0m1.935s 00:08:27.267 sys 0m1.844s 00:08:27.267 22:09:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:27.267 ************************************ 00:08:27.267 END TEST dd_flags_misc 00:08:27.267 ************************************ 00:08:27.267 22:09:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:27.267 22:09:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:08:27.267 22:09:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:27.267 * Second test run, disabling liburing, forcing AIO 00:08:27.267 22:09:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:27.267 22:09:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:27.267 22:09:59 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:27.267 22:09:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.267 22:09:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:27.267 ************************************ 00:08:27.267 START TEST dd_flag_append_forced_aio 00:08:27.267 ************************************ 00:08:27.267 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:08:27.267 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:08:27.267 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:08:27.267 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:08:27.267 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:27.267 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:27.267 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=rjgfjbtj8i7hoxs09ywip7ztbrd2hsb8 00:08:27.267 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:08:27.267 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:27.267 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:27.267 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=k7py0mrzi8ndcpfi4ttpsqoi8vom7xr3 00:08:27.267 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s rjgfjbtj8i7hoxs09ywip7ztbrd2hsb8 00:08:27.267 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s k7py0mrzi8ndcpfi4ttpsqoi8vom7xr3 00:08:27.267 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- 
# /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:27.267 [2024-07-23 22:09:59.390650] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:27.267 [2024-07-23 22:09:59.390724] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76726 ] 00:08:27.525 [2024-07-23 22:09:59.507687] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:27.525 [2024-07-23 22:09:59.521582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.525 [2024-07-23 22:09:59.571151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.525 [2024-07-23 22:09:59.612641] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:27.783  Copying: 32/32 [B] (average 31 kBps) 00:08:27.783 00:08:27.783 ************************************ 00:08:27.783 END TEST dd_flag_append_forced_aio 00:08:27.783 ************************************ 00:08:27.783 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ k7py0mrzi8ndcpfi4ttpsqoi8vom7xr3rjgfjbtj8i7hoxs09ywip7ztbrd2hsb8 == \k\7\p\y\0\m\r\z\i\8\n\d\c\p\f\i\4\t\t\p\s\q\o\i\8\v\o\m\7\x\r\3\r\j\g\f\j\b\t\j\8\i\7\h\o\x\s\0\9\y\w\i\p\7\z\t\b\r\d\2\h\s\b\8 ]] 00:08:27.783 00:08:27.783 real 0m0.482s 00:08:27.783 user 0m0.241s 00:08:27.783 sys 0m0.121s 00:08:27.783 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:27.783 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:27.783 22:09:59 
spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:27.783 22:09:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:27.783 22:09:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.783 22:09:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:27.783 ************************************ 00:08:27.783 START TEST dd_flag_directory_forced_aio 00:08:27.783 ************************************ 00:08:27.783 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:08:27.783 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:27.783 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:08:27.783 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:27.783 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.783 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:27.783 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.783 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:27.783 22:09:59 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.783 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:27.783 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.783 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:27.783 22:09:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:27.783 [2024-07-23 22:09:59.938946] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:27.783 [2024-07-23 22:09:59.939039] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76747 ] 00:08:28.041 [2024-07-23 22:10:00.065070] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:28.041 [2024-07-23 22:10:00.079996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.041 [2024-07-23 22:10:00.128776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.041 [2024-07-23 22:10:00.170278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:28.041 [2024-07-23 22:10:00.192068] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:28.041 [2024-07-23 22:10:00.192113] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:28.041 [2024-07-23 22:10:00.192125] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:28.299 [2024-07-23 22:10:00.281911] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:28.299 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:08:28.299 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:28.299 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:08:28.299 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:08:28.299 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:08:28.299 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:28.299 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:28.299 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:08:28.299 
22:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:28.299 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.299 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:28.299 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.299 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:28.299 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.299 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:28.299 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.299 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:28.299 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:28.299 [2024-07-23 22:10:00.424963] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:08:28.299 [2024-07-23 22:10:00.425340] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76762 ] 00:08:28.557 [2024-07-23 22:10:00.551697] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:28.557 [2024-07-23 22:10:00.568315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.557 [2024-07-23 22:10:00.616970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.557 [2024-07-23 22:10:00.657919] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:28.557 [2024-07-23 22:10:00.679136] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:28.557 [2024-07-23 22:10:00.679459] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:28.557 [2024-07-23 22:10:00.679602] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:28.815 [2024-07-23 22:10:00.769066] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:28.815 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:08:28.815 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:28.815 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:08:28.815 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:08:28.815 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:08:28.815 22:10:00 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:28.815 00:08:28.815 real 0m0.968s 00:08:28.815 user 0m0.491s 00:08:28.815 sys 0m0.264s 00:08:28.815 ************************************ 00:08:28.815 END TEST dd_flag_directory_forced_aio 00:08:28.815 ************************************ 00:08:28.815 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:28.815 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:28.815 22:10:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:28.815 22:10:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:28.815 22:10:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.815 22:10:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:28.815 ************************************ 00:08:28.815 START TEST dd_flag_nofollow_forced_aio 00:08:28.815 ************************************ 00:08:28.815 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:08:28.815 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:28.815 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:28.815 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:28.816 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 
00:08:28.816 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:28.816 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:08:28.816 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:28.816 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.816 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:28.816 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.816 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:28.816 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.816 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:28.816 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.816 22:10:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:28.816 22:10:00 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:28.816 [2024-07-23 22:10:00.977886] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:28.816 [2024-07-23 22:10:00.977989] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76785 ] 00:08:29.073 [2024-07-23 22:10:01.104020] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:29.073 [2024-07-23 22:10:01.119445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.073 [2024-07-23 22:10:01.168339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.073 [2024-07-23 22:10:01.209548] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:29.073 [2024-07-23 22:10:01.230484] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:29.073 [2024-07-23 22:10:01.230536] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:29.073 [2024-07-23 22:10:01.230550] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:29.332 [2024-07-23 22:10:01.319665] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:29.332 22:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:08:29.332 22:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:29.332 22:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:08:29.332 22:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:08:29.332 22:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:08:29.332 22:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:29.332 22:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:29.332 22:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:08:29.332 22:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:29.332 22:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.332 22:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:29.332 22:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.332 22:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:29.332 22:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.332 22:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:29.332 22:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.332 22:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:29.332 22:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:29.332 [2024-07-23 22:10:01.463686] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:29.332 [2024-07-23 22:10:01.463789] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76800 ] 00:08:29.589 [2024-07-23 22:10:01.590063] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:29.589 [2024-07-23 22:10:01.606712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.589 [2024-07-23 22:10:01.655396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.589 [2024-07-23 22:10:01.696390] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:29.589 [2024-07-23 22:10:01.717406] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:29.589 [2024-07-23 22:10:01.717456] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:29.589 [2024-07-23 22:10:01.717469] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:29.846 [2024-07-23 22:10:01.806774] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:29.846 22:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:08:29.846 22:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:29.847 22:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:08:29.847 22:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:08:29.847 22:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:08:29.847 22:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:29.847 22:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:08:29.847 22:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:29.847 22:10:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:29.847 22:10:01 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:29.847 [2024-07-23 22:10:01.955518] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:29.847 [2024-07-23 22:10:01.955904] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76802 ] 00:08:30.105 [2024-07-23 22:10:02.082116] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:30.105 [2024-07-23 22:10:02.098603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.105 [2024-07-23 22:10:02.148044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.105 [2024-07-23 22:10:02.189239] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:30.363  Copying: 512/512 [B] (average 500 kBps) 00:08:30.363 00:08:30.363 22:10:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 04zzpjxcxjpqx85qatu79l5b5zxw32rx9fbtjocm8i37u5omh6a6iwapj922nip65k97kvm0w3y3m0qihq9xf4j5wb0tods606fjdaog99jz0mz53q3t4ipgd3tmwx8umce3qko3bc3eo3b1vv1v6niug5cchxgumju9l5phv8ftz1akg7bkhzxhfpdkbxz32wk871vi97k7z34z3nmjd9xzqt0z59wspigvqckzcbm76f5nc92z09e1x9b66uz4mcwq7tt38lhlmn8k4a76jpnul0rynb3wgx7ozzair5ok8mp1t235u2w0y59o933a9lh8xf7ew4k2h1wzrkfgw5s412sj1e6e0of2mn5aa8eaxphf71um00j1e4ma8u5apsyyh1amo51wsefibpqrpxejmk03olpp7xj56g57t4o9yrm0a8ihcnjlnw1lx0eezncw19ftlvjm9btvjjraq571jq8tpao3eeasz4fone4pwlp49lrcrs48o7wrutsm == 
\0\4\z\z\p\j\x\c\x\j\p\q\x\8\5\q\a\t\u\7\9\l\5\b\5\z\x\w\3\2\r\x\9\f\b\t\j\o\c\m\8\i\3\7\u\5\o\m\h\6\a\6\i\w\a\p\j\9\2\2\n\i\p\6\5\k\9\7\k\v\m\0\w\3\y\3\m\0\q\i\h\q\9\x\f\4\j\5\w\b\0\t\o\d\s\6\0\6\f\j\d\a\o\g\9\9\j\z\0\m\z\5\3\q\3\t\4\i\p\g\d\3\t\m\w\x\8\u\m\c\e\3\q\k\o\3\b\c\3\e\o\3\b\1\v\v\1\v\6\n\i\u\g\5\c\c\h\x\g\u\m\j\u\9\l\5\p\h\v\8\f\t\z\1\a\k\g\7\b\k\h\z\x\h\f\p\d\k\b\x\z\3\2\w\k\8\7\1\v\i\9\7\k\7\z\3\4\z\3\n\m\j\d\9\x\z\q\t\0\z\5\9\w\s\p\i\g\v\q\c\k\z\c\b\m\7\6\f\5\n\c\9\2\z\0\9\e\1\x\9\b\6\6\u\z\4\m\c\w\q\7\t\t\3\8\l\h\l\m\n\8\k\4\a\7\6\j\p\n\u\l\0\r\y\n\b\3\w\g\x\7\o\z\z\a\i\r\5\o\k\8\m\p\1\t\2\3\5\u\2\w\0\y\5\9\o\9\3\3\a\9\l\h\8\x\f\7\e\w\4\k\2\h\1\w\z\r\k\f\g\w\5\s\4\1\2\s\j\1\e\6\e\0\o\f\2\m\n\5\a\a\8\e\a\x\p\h\f\7\1\u\m\0\0\j\1\e\4\m\a\8\u\5\a\p\s\y\y\h\1\a\m\o\5\1\w\s\e\f\i\b\p\q\r\p\x\e\j\m\k\0\3\o\l\p\p\7\x\j\5\6\g\5\7\t\4\o\9\y\r\m\0\a\8\i\h\c\n\j\l\n\w\1\l\x\0\e\e\z\n\c\w\1\9\f\t\l\v\j\m\9\b\t\v\j\j\r\a\q\5\7\1\j\q\8\t\p\a\o\3\e\e\a\s\z\4\f\o\n\e\4\p\w\l\p\4\9\l\r\c\r\s\4\8\o\7\w\r\u\t\s\m ]] 00:08:30.363 00:08:30.363 real 0m1.482s 00:08:30.363 user 0m0.764s 00:08:30.363 sys 0m0.388s 00:08:30.363 22:10:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:30.363 ************************************ 00:08:30.363 END TEST dd_flag_nofollow_forced_aio 00:08:30.363 22:10:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:30.363 ************************************ 00:08:30.363 22:10:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:08:30.363 22:10:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:30.363 22:10:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.363 22:10:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:30.363 ************************************ 00:08:30.363 START TEST 
dd_flag_noatime_forced_aio 00:08:30.363 ************************************ 00:08:30.363 22:10:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:08:30.363 22:10:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:08:30.363 22:10:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:08:30.363 22:10:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:08:30.363 22:10:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:30.363 22:10:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:30.363 22:10:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:30.363 22:10:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721772602 00:08:30.363 22:10:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:30.363 22:10:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721772602 00:08:30.363 22:10:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:08:31.321 22:10:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:31.578 [2024-07-23 22:10:03.541635] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:08:31.578 [2024-07-23 22:10:03.541747] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76848 ] 00:08:31.578 [2024-07-23 22:10:03.668096] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:31.579 [2024-07-23 22:10:03.688418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.579 [2024-07-23 22:10:03.744847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.837 [2024-07-23 22:10:03.792006] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:31.837  Copying: 512/512 [B] (average 500 kBps) 00:08:31.837 00:08:31.837 22:10:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:31.837 22:10:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721772602 )) 00:08:31.837 22:10:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:31.837 22:10:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721772602 )) 00:08:31.837 22:10:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:32.095 [2024-07-23 22:10:04.063358] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:08:32.095 [2024-07-23 22:10:04.063458] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76854 ] 00:08:32.095 [2024-07-23 22:10:04.189916] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:32.095 [2024-07-23 22:10:04.207983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.095 [2024-07-23 22:10:04.256945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.353 [2024-07-23 22:10:04.298192] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:32.354  Copying: 512/512 [B] (average 500 kBps) 00:08:32.354 00:08:32.354 22:10:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:32.354 22:10:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721772604 )) 00:08:32.354 00:08:32.354 real 0m2.050s 00:08:32.354 user 0m0.514s 00:08:32.354 sys 0m0.297s 00:08:32.354 22:10:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:32.354 22:10:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:32.354 ************************************ 00:08:32.354 END TEST dd_flag_noatime_forced_aio 00:08:32.354 ************************************ 00:08:32.612 22:10:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:32.612 22:10:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:32.612 22:10:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.612 22:10:04 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:32.613 ************************************ 00:08:32.613 START TEST dd_flags_misc_forced_aio 00:08:32.613 ************************************ 00:08:32.613 22:10:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:08:32.613 22:10:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:32.613 22:10:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:32.613 22:10:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:32.613 22:10:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:32.613 22:10:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:32.613 22:10:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:32.613 22:10:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:32.613 22:10:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:32.613 22:10:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:32.613 [2024-07-23 22:10:04.634971] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:08:32.613 [2024-07-23 22:10:04.635069] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76886 ] 00:08:32.613 [2024-07-23 22:10:04.761987] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:32.613 [2024-07-23 22:10:04.781344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.871 [2024-07-23 22:10:04.829860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.871 [2024-07-23 22:10:04.870603] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:33.131  Copying: 512/512 [B] (average 500 kBps) 00:08:33.131 00:08:33.131 22:10:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ augqahw5y595gibbh6825w2jg927xcnw27en0lo9qqfmw7so8z7qmqw3njth2zhtf6pqwtutqdrlmxo8l8n6shre50w0m5sejnsl4493zawqja62nq4ldxvf7q8e4nbfdw6hgbum3g2i2fyfk60u72rtao1z1tpt6xb66jqt6z4yzpbfdagb3bjol0ecd8o5rnmebg0kuqycbch32ma6fgi5qfi9tcjf3l6yv24st7gf9nhgo0le3s8bjw5q1ta7yh927ppbc7ita6vqrw4ie5h92htid18odqrw0pkvnv9irukgxiy2c3oyatyb52ay8gqwg0exb7g0hl43anmzepw89atjhl33gv61b2nchajzuqqm06bvxnth1n4kfpq0pk5xqvoyk882gzkobg1rtc99r5cy2ze16tkyglom6rccikfeu8u9nqhgc4emouuy06ldi1g67rlnnj4dzbps7cmtbxdxsfx6j5j0x2o59uctsok69e1ol3igc0dxj3ef == 
\a\u\g\q\a\h\w\5\y\5\9\5\g\i\b\b\h\6\8\2\5\w\2\j\g\9\2\7\x\c\n\w\2\7\e\n\0\l\o\9\q\q\f\m\w\7\s\o\8\z\7\q\m\q\w\3\n\j\t\h\2\z\h\t\f\6\p\q\w\t\u\t\q\d\r\l\m\x\o\8\l\8\n\6\s\h\r\e\5\0\w\0\m\5\s\e\j\n\s\l\4\4\9\3\z\a\w\q\j\a\6\2\n\q\4\l\d\x\v\f\7\q\8\e\4\n\b\f\d\w\6\h\g\b\u\m\3\g\2\i\2\f\y\f\k\6\0\u\7\2\r\t\a\o\1\z\1\t\p\t\6\x\b\6\6\j\q\t\6\z\4\y\z\p\b\f\d\a\g\b\3\b\j\o\l\0\e\c\d\8\o\5\r\n\m\e\b\g\0\k\u\q\y\c\b\c\h\3\2\m\a\6\f\g\i\5\q\f\i\9\t\c\j\f\3\l\6\y\v\2\4\s\t\7\g\f\9\n\h\g\o\0\l\e\3\s\8\b\j\w\5\q\1\t\a\7\y\h\9\2\7\p\p\b\c\7\i\t\a\6\v\q\r\w\4\i\e\5\h\9\2\h\t\i\d\1\8\o\d\q\r\w\0\p\k\v\n\v\9\i\r\u\k\g\x\i\y\2\c\3\o\y\a\t\y\b\5\2\a\y\8\g\q\w\g\0\e\x\b\7\g\0\h\l\4\3\a\n\m\z\e\p\w\8\9\a\t\j\h\l\3\3\g\v\6\1\b\2\n\c\h\a\j\z\u\q\q\m\0\6\b\v\x\n\t\h\1\n\4\k\f\p\q\0\p\k\5\x\q\v\o\y\k\8\8\2\g\z\k\o\b\g\1\r\t\c\9\9\r\5\c\y\2\z\e\1\6\t\k\y\g\l\o\m\6\r\c\c\i\k\f\e\u\8\u\9\n\q\h\g\c\4\e\m\o\u\u\y\0\6\l\d\i\1\g\6\7\r\l\n\n\j\4\d\z\b\p\s\7\c\m\t\b\x\d\x\s\f\x\6\j\5\j\0\x\2\o\5\9\u\c\t\s\o\k\6\9\e\1\o\l\3\i\g\c\0\d\x\j\3\e\f ]] 00:08:33.131 22:10:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:33.131 22:10:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:33.131 [2024-07-23 22:10:05.130970] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:33.131 [2024-07-23 22:10:05.131078] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76888 ] 00:08:33.131 [2024-07-23 22:10:05.258306] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:08:33.131 [2024-07-23 22:10:05.274927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.131 [2024-07-23 22:10:05.323839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.389 [2024-07-23 22:10:05.365083] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:33.389  Copying: 512/512 [B] (average 500 kBps) 00:08:33.389 00:08:33.389 22:10:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ augqahw5y595gibbh6825w2jg927xcnw27en0lo9qqfmw7so8z7qmqw3njth2zhtf6pqwtutqdrlmxo8l8n6shre50w0m5sejnsl4493zawqja62nq4ldxvf7q8e4nbfdw6hgbum3g2i2fyfk60u72rtao1z1tpt6xb66jqt6z4yzpbfdagb3bjol0ecd8o5rnmebg0kuqycbch32ma6fgi5qfi9tcjf3l6yv24st7gf9nhgo0le3s8bjw5q1ta7yh927ppbc7ita6vqrw4ie5h92htid18odqrw0pkvnv9irukgxiy2c3oyatyb52ay8gqwg0exb7g0hl43anmzepw89atjhl33gv61b2nchajzuqqm06bvxnth1n4kfpq0pk5xqvoyk882gzkobg1rtc99r5cy2ze16tkyglom6rccikfeu8u9nqhgc4emouuy06ldi1g67rlnnj4dzbps7cmtbxdxsfx6j5j0x2o59uctsok69e1ol3igc0dxj3ef == 
\a\u\g\q\a\h\w\5\y\5\9\5\g\i\b\b\h\6\8\2\5\w\2\j\g\9\2\7\x\c\n\w\2\7\e\n\0\l\o\9\q\q\f\m\w\7\s\o\8\z\7\q\m\q\w\3\n\j\t\h\2\z\h\t\f\6\p\q\w\t\u\t\q\d\r\l\m\x\o\8\l\8\n\6\s\h\r\e\5\0\w\0\m\5\s\e\j\n\s\l\4\4\9\3\z\a\w\q\j\a\6\2\n\q\4\l\d\x\v\f\7\q\8\e\4\n\b\f\d\w\6\h\g\b\u\m\3\g\2\i\2\f\y\f\k\6\0\u\7\2\r\t\a\o\1\z\1\t\p\t\6\x\b\6\6\j\q\t\6\z\4\y\z\p\b\f\d\a\g\b\3\b\j\o\l\0\e\c\d\8\o\5\r\n\m\e\b\g\0\k\u\q\y\c\b\c\h\3\2\m\a\6\f\g\i\5\q\f\i\9\t\c\j\f\3\l\6\y\v\2\4\s\t\7\g\f\9\n\h\g\o\0\l\e\3\s\8\b\j\w\5\q\1\t\a\7\y\h\9\2\7\p\p\b\c\7\i\t\a\6\v\q\r\w\4\i\e\5\h\9\2\h\t\i\d\1\8\o\d\q\r\w\0\p\k\v\n\v\9\i\r\u\k\g\x\i\y\2\c\3\o\y\a\t\y\b\5\2\a\y\8\g\q\w\g\0\e\x\b\7\g\0\h\l\4\3\a\n\m\z\e\p\w\8\9\a\t\j\h\l\3\3\g\v\6\1\b\2\n\c\h\a\j\z\u\q\q\m\0\6\b\v\x\n\t\h\1\n\4\k\f\p\q\0\p\k\5\x\q\v\o\y\k\8\8\2\g\z\k\o\b\g\1\r\t\c\9\9\r\5\c\y\2\z\e\1\6\t\k\y\g\l\o\m\6\r\c\c\i\k\f\e\u\8\u\9\n\q\h\g\c\4\e\m\o\u\u\y\0\6\l\d\i\1\g\6\7\r\l\n\n\j\4\d\z\b\p\s\7\c\m\t\b\x\d\x\s\f\x\6\j\5\j\0\x\2\o\5\9\u\c\t\s\o\k\6\9\e\1\o\l\3\i\g\c\0\d\x\j\3\e\f ]] 00:08:33.389 22:10:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:33.389 22:10:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:33.648 [2024-07-23 22:10:05.624768] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:33.648 [2024-07-23 22:10:05.624885] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76901 ] 00:08:33.648 [2024-07-23 22:10:05.751179] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:08:33.648 [2024-07-23 22:10:05.769170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.648 [2024-07-23 22:10:05.818121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.906 [2024-07-23 22:10:05.858651] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:33.906  Copying: 512/512 [B] (average 250 kBps) 00:08:33.906 00:08:33.906 22:10:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ augqahw5y595gibbh6825w2jg927xcnw27en0lo9qqfmw7so8z7qmqw3njth2zhtf6pqwtutqdrlmxo8l8n6shre50w0m5sejnsl4493zawqja62nq4ldxvf7q8e4nbfdw6hgbum3g2i2fyfk60u72rtao1z1tpt6xb66jqt6z4yzpbfdagb3bjol0ecd8o5rnmebg0kuqycbch32ma6fgi5qfi9tcjf3l6yv24st7gf9nhgo0le3s8bjw5q1ta7yh927ppbc7ita6vqrw4ie5h92htid18odqrw0pkvnv9irukgxiy2c3oyatyb52ay8gqwg0exb7g0hl43anmzepw89atjhl33gv61b2nchajzuqqm06bvxnth1n4kfpq0pk5xqvoyk882gzkobg1rtc99r5cy2ze16tkyglom6rccikfeu8u9nqhgc4emouuy06ldi1g67rlnnj4dzbps7cmtbxdxsfx6j5j0x2o59uctsok69e1ol3igc0dxj3ef == 
\a\u\g\q\a\h\w\5\y\5\9\5\g\i\b\b\h\6\8\2\5\w\2\j\g\9\2\7\x\c\n\w\2\7\e\n\0\l\o\9\q\q\f\m\w\7\s\o\8\z\7\q\m\q\w\3\n\j\t\h\2\z\h\t\f\6\p\q\w\t\u\t\q\d\r\l\m\x\o\8\l\8\n\6\s\h\r\e\5\0\w\0\m\5\s\e\j\n\s\l\4\4\9\3\z\a\w\q\j\a\6\2\n\q\4\l\d\x\v\f\7\q\8\e\4\n\b\f\d\w\6\h\g\b\u\m\3\g\2\i\2\f\y\f\k\6\0\u\7\2\r\t\a\o\1\z\1\t\p\t\6\x\b\6\6\j\q\t\6\z\4\y\z\p\b\f\d\a\g\b\3\b\j\o\l\0\e\c\d\8\o\5\r\n\m\e\b\g\0\k\u\q\y\c\b\c\h\3\2\m\a\6\f\g\i\5\q\f\i\9\t\c\j\f\3\l\6\y\v\2\4\s\t\7\g\f\9\n\h\g\o\0\l\e\3\s\8\b\j\w\5\q\1\t\a\7\y\h\9\2\7\p\p\b\c\7\i\t\a\6\v\q\r\w\4\i\e\5\h\9\2\h\t\i\d\1\8\o\d\q\r\w\0\p\k\v\n\v\9\i\r\u\k\g\x\i\y\2\c\3\o\y\a\t\y\b\5\2\a\y\8\g\q\w\g\0\e\x\b\7\g\0\h\l\4\3\a\n\m\z\e\p\w\8\9\a\t\j\h\l\3\3\g\v\6\1\b\2\n\c\h\a\j\z\u\q\q\m\0\6\b\v\x\n\t\h\1\n\4\k\f\p\q\0\p\k\5\x\q\v\o\y\k\8\8\2\g\z\k\o\b\g\1\r\t\c\9\9\r\5\c\y\2\z\e\1\6\t\k\y\g\l\o\m\6\r\c\c\i\k\f\e\u\8\u\9\n\q\h\g\c\4\e\m\o\u\u\y\0\6\l\d\i\1\g\6\7\r\l\n\n\j\4\d\z\b\p\s\7\c\m\t\b\x\d\x\s\f\x\6\j\5\j\0\x\2\o\5\9\u\c\t\s\o\k\6\9\e\1\o\l\3\i\g\c\0\d\x\j\3\e\f ]] 00:08:33.906 22:10:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:33.906 22:10:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:34.165 [2024-07-23 22:10:06.120207] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:34.165 [2024-07-23 22:10:06.120313] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76903 ] 00:08:34.165 [2024-07-23 22:10:06.246296] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:08:34.165 [2024-07-23 22:10:06.261204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.165 [2024-07-23 22:10:06.309807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.165 [2024-07-23 22:10:06.350444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:34.423  Copying: 512/512 [B] (average 500 kBps) 00:08:34.423 00:08:34.423 22:10:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ augqahw5y595gibbh6825w2jg927xcnw27en0lo9qqfmw7so8z7qmqw3njth2zhtf6pqwtutqdrlmxo8l8n6shre50w0m5sejnsl4493zawqja62nq4ldxvf7q8e4nbfdw6hgbum3g2i2fyfk60u72rtao1z1tpt6xb66jqt6z4yzpbfdagb3bjol0ecd8o5rnmebg0kuqycbch32ma6fgi5qfi9tcjf3l6yv24st7gf9nhgo0le3s8bjw5q1ta7yh927ppbc7ita6vqrw4ie5h92htid18odqrw0pkvnv9irukgxiy2c3oyatyb52ay8gqwg0exb7g0hl43anmzepw89atjhl33gv61b2nchajzuqqm06bvxnth1n4kfpq0pk5xqvoyk882gzkobg1rtc99r5cy2ze16tkyglom6rccikfeu8u9nqhgc4emouuy06ldi1g67rlnnj4dzbps7cmtbxdxsfx6j5j0x2o59uctsok69e1ol3igc0dxj3ef == 
\a\u\g\q\a\h\w\5\y\5\9\5\g\i\b\b\h\6\8\2\5\w\2\j\g\9\2\7\x\c\n\w\2\7\e\n\0\l\o\9\q\q\f\m\w\7\s\o\8\z\7\q\m\q\w\3\n\j\t\h\2\z\h\t\f\6\p\q\w\t\u\t\q\d\r\l\m\x\o\8\l\8\n\6\s\h\r\e\5\0\w\0\m\5\s\e\j\n\s\l\4\4\9\3\z\a\w\q\j\a\6\2\n\q\4\l\d\x\v\f\7\q\8\e\4\n\b\f\d\w\6\h\g\b\u\m\3\g\2\i\2\f\y\f\k\6\0\u\7\2\r\t\a\o\1\z\1\t\p\t\6\x\b\6\6\j\q\t\6\z\4\y\z\p\b\f\d\a\g\b\3\b\j\o\l\0\e\c\d\8\o\5\r\n\m\e\b\g\0\k\u\q\y\c\b\c\h\3\2\m\a\6\f\g\i\5\q\f\i\9\t\c\j\f\3\l\6\y\v\2\4\s\t\7\g\f\9\n\h\g\o\0\l\e\3\s\8\b\j\w\5\q\1\t\a\7\y\h\9\2\7\p\p\b\c\7\i\t\a\6\v\q\r\w\4\i\e\5\h\9\2\h\t\i\d\1\8\o\d\q\r\w\0\p\k\v\n\v\9\i\r\u\k\g\x\i\y\2\c\3\o\y\a\t\y\b\5\2\a\y\8\g\q\w\g\0\e\x\b\7\g\0\h\l\4\3\a\n\m\z\e\p\w\8\9\a\t\j\h\l\3\3\g\v\6\1\b\2\n\c\h\a\j\z\u\q\q\m\0\6\b\v\x\n\t\h\1\n\4\k\f\p\q\0\p\k\5\x\q\v\o\y\k\8\8\2\g\z\k\o\b\g\1\r\t\c\9\9\r\5\c\y\2\z\e\1\6\t\k\y\g\l\o\m\6\r\c\c\i\k\f\e\u\8\u\9\n\q\h\g\c\4\e\m\o\u\u\y\0\6\l\d\i\1\g\6\7\r\l\n\n\j\4\d\z\b\p\s\7\c\m\t\b\x\d\x\s\f\x\6\j\5\j\0\x\2\o\5\9\u\c\t\s\o\k\6\9\e\1\o\l\3\i\g\c\0\d\x\j\3\e\f ]] 00:08:34.423 22:10:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:34.423 22:10:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:34.423 22:10:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:34.423 22:10:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:34.423 22:10:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:34.423 22:10:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:34.423 [2024-07-23 22:10:06.604499] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 
initialization... 00:08:34.423 [2024-07-23 22:10:06.604573] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76916 ] 00:08:34.682 [2024-07-23 22:10:06.722076] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:34.682 [2024-07-23 22:10:06.736436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.682 [2024-07-23 22:10:06.784973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.682 [2024-07-23 22:10:06.825674] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:34.941  Copying: 512/512 [B] (average 500 kBps) 00:08:34.941 00:08:34.941 22:10:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8qmcmum8rtqcubwmqdrywovidolfutk7nsqwg5fnwkdq6hrsxwfis76ap2i4ow8gqtmahujssq4el91yeesduuuiljuxb2r8bs0zcstz1o3dqo5x0v20rah13od6hm8cc0cbeij3x36wwh4ybrm4797j64lyh7nwocbgith05p872ezwc5lv5g1m3lebosyanulnrippcqerevn987kn4nt7cwboyik54u548xvt4zqszgosd6ljpovoffoh314ox9dodcemihvw74t09k0uk49k8klego1r8aroif0yugp3htaiiqcv72ww29h4n99d7sbhexp53yo57rg9k4hfpzta5v4a24c6kogdo1tjb6xi6z8dy8dmhv4w327x8bgo3jow5wv5ja3mf6i388i6wiv7r9mvgz02vv3s5ea2pem3jub6h4psa5i9yam3bh2onvpe121w87t82acwkn84blbij4qkqrqhzv6fpeu53trw66r64bfpyah1t6e80i71 == 
\8\q\m\c\m\u\m\8\r\t\q\c\u\b\w\m\q\d\r\y\w\o\v\i\d\o\l\f\u\t\k\7\n\s\q\w\g\5\f\n\w\k\d\q\6\h\r\s\x\w\f\i\s\7\6\a\p\2\i\4\o\w\8\g\q\t\m\a\h\u\j\s\s\q\4\e\l\9\1\y\e\e\s\d\u\u\u\i\l\j\u\x\b\2\r\8\b\s\0\z\c\s\t\z\1\o\3\d\q\o\5\x\0\v\2\0\r\a\h\1\3\o\d\6\h\m\8\c\c\0\c\b\e\i\j\3\x\3\6\w\w\h\4\y\b\r\m\4\7\9\7\j\6\4\l\y\h\7\n\w\o\c\b\g\i\t\h\0\5\p\8\7\2\e\z\w\c\5\l\v\5\g\1\m\3\l\e\b\o\s\y\a\n\u\l\n\r\i\p\p\c\q\e\r\e\v\n\9\8\7\k\n\4\n\t\7\c\w\b\o\y\i\k\5\4\u\5\4\8\x\v\t\4\z\q\s\z\g\o\s\d\6\l\j\p\o\v\o\f\f\o\h\3\1\4\o\x\9\d\o\d\c\e\m\i\h\v\w\7\4\t\0\9\k\0\u\k\4\9\k\8\k\l\e\g\o\1\r\8\a\r\o\i\f\0\y\u\g\p\3\h\t\a\i\i\q\c\v\7\2\w\w\2\9\h\4\n\9\9\d\7\s\b\h\e\x\p\5\3\y\o\5\7\r\g\9\k\4\h\f\p\z\t\a\5\v\4\a\2\4\c\6\k\o\g\d\o\1\t\j\b\6\x\i\6\z\8\d\y\8\d\m\h\v\4\w\3\2\7\x\8\b\g\o\3\j\o\w\5\w\v\5\j\a\3\m\f\6\i\3\8\8\i\6\w\i\v\7\r\9\m\v\g\z\0\2\v\v\3\s\5\e\a\2\p\e\m\3\j\u\b\6\h\4\p\s\a\5\i\9\y\a\m\3\b\h\2\o\n\v\p\e\1\2\1\w\8\7\t\8\2\a\c\w\k\n\8\4\b\l\b\i\j\4\q\k\q\r\q\h\z\v\6\f\p\e\u\5\3\t\r\w\6\6\r\6\4\b\f\p\y\a\h\1\t\6\e\8\0\i\7\1 ]] 00:08:34.941 22:10:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:34.941 22:10:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:34.941 [2024-07-23 22:10:07.085659] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:34.941 [2024-07-23 22:10:07.085769] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76918 ] 00:08:35.200 [2024-07-23 22:10:07.211602] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:08:35.200 [2024-07-23 22:10:07.227736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.200 [2024-07-23 22:10:07.276471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.200 [2024-07-23 22:10:07.317120] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:35.459  Copying: 512/512 [B] (average 500 kBps) 00:08:35.459 00:08:35.459 22:10:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8qmcmum8rtqcubwmqdrywovidolfutk7nsqwg5fnwkdq6hrsxwfis76ap2i4ow8gqtmahujssq4el91yeesduuuiljuxb2r8bs0zcstz1o3dqo5x0v20rah13od6hm8cc0cbeij3x36wwh4ybrm4797j64lyh7nwocbgith05p872ezwc5lv5g1m3lebosyanulnrippcqerevn987kn4nt7cwboyik54u548xvt4zqszgosd6ljpovoffoh314ox9dodcemihvw74t09k0uk49k8klego1r8aroif0yugp3htaiiqcv72ww29h4n99d7sbhexp53yo57rg9k4hfpzta5v4a24c6kogdo1tjb6xi6z8dy8dmhv4w327x8bgo3jow5wv5ja3mf6i388i6wiv7r9mvgz02vv3s5ea2pem3jub6h4psa5i9yam3bh2onvpe121w87t82acwkn84blbij4qkqrqhzv6fpeu53trw66r64bfpyah1t6e80i71 == 
\8\q\m\c\m\u\m\8\r\t\q\c\u\b\w\m\q\d\r\y\w\o\v\i\d\o\l\f\u\t\k\7\n\s\q\w\g\5\f\n\w\k\d\q\6\h\r\s\x\w\f\i\s\7\6\a\p\2\i\4\o\w\8\g\q\t\m\a\h\u\j\s\s\q\4\e\l\9\1\y\e\e\s\d\u\u\u\i\l\j\u\x\b\2\r\8\b\s\0\z\c\s\t\z\1\o\3\d\q\o\5\x\0\v\2\0\r\a\h\1\3\o\d\6\h\m\8\c\c\0\c\b\e\i\j\3\x\3\6\w\w\h\4\y\b\r\m\4\7\9\7\j\6\4\l\y\h\7\n\w\o\c\b\g\i\t\h\0\5\p\8\7\2\e\z\w\c\5\l\v\5\g\1\m\3\l\e\b\o\s\y\a\n\u\l\n\r\i\p\p\c\q\e\r\e\v\n\9\8\7\k\n\4\n\t\7\c\w\b\o\y\i\k\5\4\u\5\4\8\x\v\t\4\z\q\s\z\g\o\s\d\6\l\j\p\o\v\o\f\f\o\h\3\1\4\o\x\9\d\o\d\c\e\m\i\h\v\w\7\4\t\0\9\k\0\u\k\4\9\k\8\k\l\e\g\o\1\r\8\a\r\o\i\f\0\y\u\g\p\3\h\t\a\i\i\q\c\v\7\2\w\w\2\9\h\4\n\9\9\d\7\s\b\h\e\x\p\5\3\y\o\5\7\r\g\9\k\4\h\f\p\z\t\a\5\v\4\a\2\4\c\6\k\o\g\d\o\1\t\j\b\6\x\i\6\z\8\d\y\8\d\m\h\v\4\w\3\2\7\x\8\b\g\o\3\j\o\w\5\w\v\5\j\a\3\m\f\6\i\3\8\8\i\6\w\i\v\7\r\9\m\v\g\z\0\2\v\v\3\s\5\e\a\2\p\e\m\3\j\u\b\6\h\4\p\s\a\5\i\9\y\a\m\3\b\h\2\o\n\v\p\e\1\2\1\w\8\7\t\8\2\a\c\w\k\n\8\4\b\l\b\i\j\4\q\k\q\r\q\h\z\v\6\f\p\e\u\5\3\t\r\w\6\6\r\6\4\b\f\p\y\a\h\1\t\6\e\8\0\i\7\1 ]] 00:08:35.459 22:10:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:35.459 22:10:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:35.459 [2024-07-23 22:10:07.568020] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:35.459 [2024-07-23 22:10:07.568109] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76931 ] 00:08:35.717 [2024-07-23 22:10:07.684929] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:08:35.717 [2024-07-23 22:10:07.698192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.717 [2024-07-23 22:10:07.746998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.717 [2024-07-23 22:10:07.787949] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:35.976  Copying: 512/512 [B] (average 500 kBps) 00:08:35.976 00:08:35.977 22:10:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8qmcmum8rtqcubwmqdrywovidolfutk7nsqwg5fnwkdq6hrsxwfis76ap2i4ow8gqtmahujssq4el91yeesduuuiljuxb2r8bs0zcstz1o3dqo5x0v20rah13od6hm8cc0cbeij3x36wwh4ybrm4797j64lyh7nwocbgith05p872ezwc5lv5g1m3lebosyanulnrippcqerevn987kn4nt7cwboyik54u548xvt4zqszgosd6ljpovoffoh314ox9dodcemihvw74t09k0uk49k8klego1r8aroif0yugp3htaiiqcv72ww29h4n99d7sbhexp53yo57rg9k4hfpzta5v4a24c6kogdo1tjb6xi6z8dy8dmhv4w327x8bgo3jow5wv5ja3mf6i388i6wiv7r9mvgz02vv3s5ea2pem3jub6h4psa5i9yam3bh2onvpe121w87t82acwkn84blbij4qkqrqhzv6fpeu53trw66r64bfpyah1t6e80i71 == 
\8\q\m\c\m\u\m\8\r\t\q\c\u\b\w\m\q\d\r\y\w\o\v\i\d\o\l\f\u\t\k\7\n\s\q\w\g\5\f\n\w\k\d\q\6\h\r\s\x\w\f\i\s\7\6\a\p\2\i\4\o\w\8\g\q\t\m\a\h\u\j\s\s\q\4\e\l\9\1\y\e\e\s\d\u\u\u\i\l\j\u\x\b\2\r\8\b\s\0\z\c\s\t\z\1\o\3\d\q\o\5\x\0\v\2\0\r\a\h\1\3\o\d\6\h\m\8\c\c\0\c\b\e\i\j\3\x\3\6\w\w\h\4\y\b\r\m\4\7\9\7\j\6\4\l\y\h\7\n\w\o\c\b\g\i\t\h\0\5\p\8\7\2\e\z\w\c\5\l\v\5\g\1\m\3\l\e\b\o\s\y\a\n\u\l\n\r\i\p\p\c\q\e\r\e\v\n\9\8\7\k\n\4\n\t\7\c\w\b\o\y\i\k\5\4\u\5\4\8\x\v\t\4\z\q\s\z\g\o\s\d\6\l\j\p\o\v\o\f\f\o\h\3\1\4\o\x\9\d\o\d\c\e\m\i\h\v\w\7\4\t\0\9\k\0\u\k\4\9\k\8\k\l\e\g\o\1\r\8\a\r\o\i\f\0\y\u\g\p\3\h\t\a\i\i\q\c\v\7\2\w\w\2\9\h\4\n\9\9\d\7\s\b\h\e\x\p\5\3\y\o\5\7\r\g\9\k\4\h\f\p\z\t\a\5\v\4\a\2\4\c\6\k\o\g\d\o\1\t\j\b\6\x\i\6\z\8\d\y\8\d\m\h\v\4\w\3\2\7\x\8\b\g\o\3\j\o\w\5\w\v\5\j\a\3\m\f\6\i\3\8\8\i\6\w\i\v\7\r\9\m\v\g\z\0\2\v\v\3\s\5\e\a\2\p\e\m\3\j\u\b\6\h\4\p\s\a\5\i\9\y\a\m\3\b\h\2\o\n\v\p\e\1\2\1\w\8\7\t\8\2\a\c\w\k\n\8\4\b\l\b\i\j\4\q\k\q\r\q\h\z\v\6\f\p\e\u\5\3\t\r\w\6\6\r\6\4\b\f\p\y\a\h\1\t\6\e\8\0\i\7\1 ]] 00:08:35.977 22:10:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:35.977 22:10:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:35.977 [2024-07-23 22:10:08.049983] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:35.977 [2024-07-23 22:10:08.050092] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76933 ] 00:08:36.235 [2024-07-23 22:10:08.176050] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:08:36.235 [2024-07-23 22:10:08.191382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.235 [2024-07-23 22:10:08.240459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.235 [2024-07-23 22:10:08.281441] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:36.495  Copying: 512/512 [B] (average 500 kBps) 00:08:36.495 00:08:36.495 22:10:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8qmcmum8rtqcubwmqdrywovidolfutk7nsqwg5fnwkdq6hrsxwfis76ap2i4ow8gqtmahujssq4el91yeesduuuiljuxb2r8bs0zcstz1o3dqo5x0v20rah13od6hm8cc0cbeij3x36wwh4ybrm4797j64lyh7nwocbgith05p872ezwc5lv5g1m3lebosyanulnrippcqerevn987kn4nt7cwboyik54u548xvt4zqszgosd6ljpovoffoh314ox9dodcemihvw74t09k0uk49k8klego1r8aroif0yugp3htaiiqcv72ww29h4n99d7sbhexp53yo57rg9k4hfpzta5v4a24c6kogdo1tjb6xi6z8dy8dmhv4w327x8bgo3jow5wv5ja3mf6i388i6wiv7r9mvgz02vv3s5ea2pem3jub6h4psa5i9yam3bh2onvpe121w87t82acwkn84blbij4qkqrqhzv6fpeu53trw66r64bfpyah1t6e80i71 == 
\8\q\m\c\m\u\m\8\r\t\q\c\u\b\w\m\q\d\r\y\w\o\v\i\d\o\l\f\u\t\k\7\n\s\q\w\g\5\f\n\w\k\d\q\6\h\r\s\x\w\f\i\s\7\6\a\p\2\i\4\o\w\8\g\q\t\m\a\h\u\j\s\s\q\4\e\l\9\1\y\e\e\s\d\u\u\u\i\l\j\u\x\b\2\r\8\b\s\0\z\c\s\t\z\1\o\3\d\q\o\5\x\0\v\2\0\r\a\h\1\3\o\d\6\h\m\8\c\c\0\c\b\e\i\j\3\x\3\6\w\w\h\4\y\b\r\m\4\7\9\7\j\6\4\l\y\h\7\n\w\o\c\b\g\i\t\h\0\5\p\8\7\2\e\z\w\c\5\l\v\5\g\1\m\3\l\e\b\o\s\y\a\n\u\l\n\r\i\p\p\c\q\e\r\e\v\n\9\8\7\k\n\4\n\t\7\c\w\b\o\y\i\k\5\4\u\5\4\8\x\v\t\4\z\q\s\z\g\o\s\d\6\l\j\p\o\v\o\f\f\o\h\3\1\4\o\x\9\d\o\d\c\e\m\i\h\v\w\7\4\t\0\9\k\0\u\k\4\9\k\8\k\l\e\g\o\1\r\8\a\r\o\i\f\0\y\u\g\p\3\h\t\a\i\i\q\c\v\7\2\w\w\2\9\h\4\n\9\9\d\7\s\b\h\e\x\p\5\3\y\o\5\7\r\g\9\k\4\h\f\p\z\t\a\5\v\4\a\2\4\c\6\k\o\g\d\o\1\t\j\b\6\x\i\6\z\8\d\y\8\d\m\h\v\4\w\3\2\7\x\8\b\g\o\3\j\o\w\5\w\v\5\j\a\3\m\f\6\i\3\8\8\i\6\w\i\v\7\r\9\m\v\g\z\0\2\v\v\3\s\5\e\a\2\p\e\m\3\j\u\b\6\h\4\p\s\a\5\i\9\y\a\m\3\b\h\2\o\n\v\p\e\1\2\1\w\8\7\t\8\2\a\c\w\k\n\8\4\b\l\b\i\j\4\q\k\q\r\q\h\z\v\6\f\p\e\u\5\3\t\r\w\6\6\r\6\4\b\f\p\y\a\h\1\t\6\e\8\0\i\7\1 ]] 00:08:36.495 00:08:36.495 real 0m3.912s 00:08:36.495 user 0m1.917s 00:08:36.495 sys 0m1.027s 00:08:36.495 22:10:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.495 ************************************ 00:08:36.495 END TEST dd_flags_misc_forced_aio 00:08:36.495 ************************************ 00:08:36.495 22:10:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:36.495 22:10:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:08:36.495 22:10:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:36.495 22:10:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:36.495 00:08:36.495 real 0m18.367s 00:08:36.495 user 0m8.095s 00:08:36.495 
sys 0m5.903s 00:08:36.495 22:10:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.495 22:10:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:36.495 ************************************ 00:08:36.495 END TEST spdk_dd_posix 00:08:36.495 ************************************ 00:08:36.495 22:10:08 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:36.495 22:10:08 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:36.495 22:10:08 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.495 22:10:08 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:36.495 ************************************ 00:08:36.495 START TEST spdk_dd_malloc 00:08:36.495 ************************************ 00:08:36.495 22:10:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:36.495 * Looking for test storage... 
00:08:36.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:36.755 22:10:08 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:36.755 22:10:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.755 22:10:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.755 22:10:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.755 22:10:08 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.755 22:10:08 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.755 22:10:08 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.755 22:10:08 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:08:36.755 22:10:08 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.755 22:10:08 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:36.755 22:10:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:36.755 22:10:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.755 22:10:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:36.755 ************************************ 00:08:36.755 START TEST dd_malloc_copy 00:08:36.755 ************************************ 00:08:36.755 22:10:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:08:36.755 22:10:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 
mbdev0_b=1048576 mbdev0_bs=512 00:08:36.755 22:10:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:36.755 22:10:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:36.755 22:10:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:36.755 22:10:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:36.755 22:10:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:36.755 22:10:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:36.755 22:10:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:08:36.755 22:10:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:36.755 22:10:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:36.755 { 00:08:36.755 "subsystems": [ 00:08:36.755 { 00:08:36.755 "subsystem": "bdev", 00:08:36.755 "config": [ 00:08:36.755 { 00:08:36.755 "params": { 00:08:36.755 "block_size": 512, 00:08:36.755 "num_blocks": 1048576, 00:08:36.755 "name": "malloc0" 00:08:36.755 }, 00:08:36.755 "method": "bdev_malloc_create" 00:08:36.755 }, 00:08:36.755 { 00:08:36.755 "params": { 00:08:36.755 "block_size": 512, 00:08:36.755 "num_blocks": 1048576, 00:08:36.755 "name": "malloc1" 00:08:36.755 }, 00:08:36.755 "method": "bdev_malloc_create" 00:08:36.755 }, 00:08:36.755 { 00:08:36.755 "method": "bdev_wait_for_examine" 00:08:36.755 } 00:08:36.755 ] 00:08:36.755 } 00:08:36.755 ] 00:08:36.755 } 00:08:36.755 [2024-07-23 22:10:08.776961] Starting SPDK v24.09-pre git sha1 
78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:36.755 [2024-07-23 22:10:08.777092] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77007 ] 00:08:36.755 [2024-07-23 22:10:08.913067] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:36.755 [2024-07-23 22:10:08.930201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.014 [2024-07-23 22:10:08.978864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.014 [2024-07-23 22:10:09.020036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:39.585  Copying: 262/512 [MB] (262 MBps) Copying: 512/512 [MB] (average 263 MBps) 00:08:39.585 00:08:39.585 22:10:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:39.585 22:10:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:08:39.585 22:10:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:39.585 22:10:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:39.585 { 00:08:39.585 "subsystems": [ 00:08:39.585 { 00:08:39.585 "subsystem": "bdev", 00:08:39.585 "config": [ 00:08:39.585 { 00:08:39.585 "params": { 00:08:39.585 "block_size": 512, 00:08:39.585 "num_blocks": 1048576, 00:08:39.585 "name": "malloc0" 00:08:39.585 }, 00:08:39.585 "method": "bdev_malloc_create" 00:08:39.585 }, 00:08:39.585 { 00:08:39.585 "params": { 00:08:39.585 "block_size": 512, 00:08:39.585 "num_blocks": 1048576, 00:08:39.585 "name": "malloc1" 00:08:39.585 }, 00:08:39.585 "method": "bdev_malloc_create" 
00:08:39.585 }, 00:08:39.585 { 00:08:39.585 "method": "bdev_wait_for_examine" 00:08:39.585 } 00:08:39.585 ] 00:08:39.585 } 00:08:39.585 ] 00:08:39.585 } 00:08:39.585 [2024-07-23 22:10:11.739622] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:39.585 [2024-07-23 22:10:11.739713] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77044 ] 00:08:39.843 [2024-07-23 22:10:11.867143] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:39.843 [2024-07-23 22:10:11.883950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.843 [2024-07-23 22:10:11.932642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.843 [2024-07-23 22:10:11.974108] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:42.721  Copying: 267/512 [MB] (267 MBps) Copying: 512/512 [MB] (average 266 MBps) 00:08:42.721 00:08:42.721 00:08:42.721 real 0m5.908s 00:08:42.721 user 0m5.056s 00:08:42.721 sys 0m0.697s 00:08:42.721 ************************************ 00:08:42.721 END TEST dd_malloc_copy 00:08:42.721 ************************************ 00:08:42.721 22:10:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:42.721 22:10:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:42.721 00:08:42.721 real 0m6.067s 00:08:42.721 user 0m5.116s 00:08:42.721 sys 0m0.799s 00:08:42.721 22:10:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:42.721 22:10:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:42.721 ************************************ 
00:08:42.721 END TEST spdk_dd_malloc 00:08:42.721 ************************************ 00:08:42.721 22:10:14 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:42.721 22:10:14 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:42.721 22:10:14 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.721 22:10:14 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:42.721 ************************************ 00:08:42.721 START TEST spdk_dd_bdev_to_bdev 00:08:42.721 ************************************ 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:42.721 * Looking for test storage... 00:08:42.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- 
dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- 
# echo 'This Is Our Magic, find it' 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:42.721 ************************************ 00:08:42.721 START TEST dd_inflate_file 00:08:42.721 ************************************ 00:08:42.721 22:10:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:42.721 [2024-07-23 22:10:14.888340] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:42.722 [2024-07-23 22:10:14.888434] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77143 ] 00:08:42.980 [2024-07-23 22:10:15.014967] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:42.980 [2024-07-23 22:10:15.033008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.980 [2024-07-23 22:10:15.081664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.980 [2024-07-23 22:10:15.122239] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:43.262  Copying: 64/64 [MB] (average 1523 MBps) 00:08:43.262 00:08:43.262 00:08:43.262 real 0m0.520s 00:08:43.262 user 0m0.286s 00:08:43.262 sys 0m0.275s 00:08:43.262 22:10:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:43.262 22:10:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:08:43.262 ************************************ 00:08:43.262 END TEST dd_inflate_file 00:08:43.262 ************************************ 00:08:43.262 22:10:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:43.262 22:10:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:43.262 22:10:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:43.262 22:10:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:43.262 22:10:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:43.262 22:10:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.262 22:10:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:43.262 22:10:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:43.262 22:10:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:43.262 ************************************ 00:08:43.262 START TEST dd_copy_to_out_bdev 00:08:43.262 
************************************ 00:08:43.262 22:10:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:43.527 { 00:08:43.527 "subsystems": [ 00:08:43.527 { 00:08:43.527 "subsystem": "bdev", 00:08:43.527 "config": [ 00:08:43.527 { 00:08:43.527 "params": { 00:08:43.527 "trtype": "pcie", 00:08:43.527 "traddr": "0000:00:10.0", 00:08:43.527 "name": "Nvme0" 00:08:43.527 }, 00:08:43.527 "method": "bdev_nvme_attach_controller" 00:08:43.527 }, 00:08:43.527 { 00:08:43.527 "params": { 00:08:43.527 "trtype": "pcie", 00:08:43.527 "traddr": "0000:00:11.0", 00:08:43.527 "name": "Nvme1" 00:08:43.527 }, 00:08:43.527 "method": "bdev_nvme_attach_controller" 00:08:43.527 }, 00:08:43.527 { 00:08:43.527 "method": "bdev_wait_for_examine" 00:08:43.527 } 00:08:43.527 ] 00:08:43.527 } 00:08:43.527 ] 00:08:43.527 } 00:08:43.527 [2024-07-23 22:10:15.476564] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:43.527 [2024-07-23 22:10:15.476667] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77181 ] 00:08:43.527 [2024-07-23 22:10:15.603381] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:43.527 [2024-07-23 22:10:15.621662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.527 [2024-07-23 22:10:15.670396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.527 [2024-07-23 22:10:15.711937] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:45.162  Copying: 61/64 [MB] (61 MBps) Copying: 64/64 [MB] (average 61 MBps) 00:08:45.162 00:08:45.162 00:08:45.162 real 0m1.707s 00:08:45.162 user 0m1.465s 00:08:45.162 sys 0m1.359s 00:08:45.162 22:10:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:45.162 22:10:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:45.162 ************************************ 00:08:45.162 END TEST dd_copy_to_out_bdev 00:08:45.162 ************************************ 00:08:45.162 22:10:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:45.162 22:10:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:45.162 22:10:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:45.162 22:10:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.162 22:10:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:45.162 ************************************ 00:08:45.162 START TEST dd_offset_magic 00:08:45.162 ************************************ 00:08:45.162 22:10:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:08:45.162 22:10:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:45.163 22:10:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:45.163 22:10:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- 
dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:45.163 22:10:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:45.163 22:10:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:45.163 22:10:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:45.163 22:10:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:45.163 22:10:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:45.163 { 00:08:45.163 "subsystems": [ 00:08:45.163 { 00:08:45.163 "subsystem": "bdev", 00:08:45.163 "config": [ 00:08:45.163 { 00:08:45.163 "params": { 00:08:45.163 "trtype": "pcie", 00:08:45.163 "traddr": "0000:00:10.0", 00:08:45.163 "name": "Nvme0" 00:08:45.163 }, 00:08:45.163 "method": "bdev_nvme_attach_controller" 00:08:45.163 }, 00:08:45.163 { 00:08:45.163 "params": { 00:08:45.163 "trtype": "pcie", 00:08:45.163 "traddr": "0000:00:11.0", 00:08:45.163 "name": "Nvme1" 00:08:45.163 }, 00:08:45.163 "method": "bdev_nvme_attach_controller" 00:08:45.163 }, 00:08:45.163 { 00:08:45.163 "method": "bdev_wait_for_examine" 00:08:45.163 } 00:08:45.163 ] 00:08:45.163 } 00:08:45.163 ] 00:08:45.163 } 00:08:45.163 [2024-07-23 22:10:17.243476] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:45.163 [2024-07-23 22:10:17.244072] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77222 ] 00:08:45.422 [2024-07-23 22:10:17.370203] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:08:45.422 [2024-07-23 22:10:17.388445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.422 [2024-07-23 22:10:17.436879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.422 [2024-07-23 22:10:17.478542] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:45.938  Copying: 65/65 [MB] (average 747 MBps) 00:08:45.938 00:08:45.938 22:10:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:45.938 22:10:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:45.938 22:10:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:45.938 22:10:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:45.938 [2024-07-23 22:10:18.009342] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:08:45.938 [2024-07-23 22:10:18.009453] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77237 ] 00:08:45.938 { 00:08:45.938 "subsystems": [ 00:08:45.938 { 00:08:45.938 "subsystem": "bdev", 00:08:45.938 "config": [ 00:08:45.938 { 00:08:45.938 "params": { 00:08:45.938 "trtype": "pcie", 00:08:45.938 "traddr": "0000:00:10.0", 00:08:45.938 "name": "Nvme0" 00:08:45.938 }, 00:08:45.938 "method": "bdev_nvme_attach_controller" 00:08:45.938 }, 00:08:45.938 { 00:08:45.938 "params": { 00:08:45.938 "trtype": "pcie", 00:08:45.939 "traddr": "0000:00:11.0", 00:08:45.939 "name": "Nvme1" 00:08:45.939 }, 00:08:45.939 "method": "bdev_nvme_attach_controller" 00:08:45.939 }, 00:08:45.939 { 00:08:45.939 "method": "bdev_wait_for_examine" 00:08:45.939 } 00:08:45.939 ] 00:08:45.939 } 00:08:45.939 ] 00:08:45.939 } 00:08:46.197 [2024-07-23 22:10:18.136033] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:46.197 [2024-07-23 22:10:18.152761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.197 [2024-07-23 22:10:18.201427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.197 [2024-07-23 22:10:18.242616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:46.455  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:46.455 00:08:46.455 22:10:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:46.455 22:10:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:46.455 22:10:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:46.455 22:10:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:46.455 22:10:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:46.455 22:10:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:46.455 22:10:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:46.455 [2024-07-23 22:10:18.621720] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:08:46.455 [2024-07-23 22:10:18.621817] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77259 ] 00:08:46.455 { 00:08:46.455 "subsystems": [ 00:08:46.455 { 00:08:46.455 "subsystem": "bdev", 00:08:46.455 "config": [ 00:08:46.455 { 00:08:46.455 "params": { 00:08:46.455 "trtype": "pcie", 00:08:46.455 "traddr": "0000:00:10.0", 00:08:46.455 "name": "Nvme0" 00:08:46.455 }, 00:08:46.455 "method": "bdev_nvme_attach_controller" 00:08:46.455 }, 00:08:46.455 { 00:08:46.455 "params": { 00:08:46.455 "trtype": "pcie", 00:08:46.455 "traddr": "0000:00:11.0", 00:08:46.455 "name": "Nvme1" 00:08:46.455 }, 00:08:46.455 "method": "bdev_nvme_attach_controller" 00:08:46.455 }, 00:08:46.455 { 00:08:46.455 "method": "bdev_wait_for_examine" 00:08:46.455 } 00:08:46.455 ] 00:08:46.455 } 00:08:46.455 ] 00:08:46.455 } 00:08:46.713 [2024-07-23 22:10:18.739204] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:46.713 [2024-07-23 22:10:18.757956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.713 [2024-07-23 22:10:18.807094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.713 [2024-07-23 22:10:18.848750] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:47.229  Copying: 65/65 [MB] (average 833 MBps) 00:08:47.229 00:08:47.229 22:10:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:47.229 22:10:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:47.229 22:10:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:47.229 22:10:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:47.229 [2024-07-23 22:10:19.340274] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:08:47.229 [2024-07-23 22:10:19.340349] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77273 ] 00:08:47.229 { 00:08:47.229 "subsystems": [ 00:08:47.229 { 00:08:47.229 "subsystem": "bdev", 00:08:47.229 "config": [ 00:08:47.229 { 00:08:47.229 "params": { 00:08:47.229 "trtype": "pcie", 00:08:47.229 "traddr": "0000:00:10.0", 00:08:47.229 "name": "Nvme0" 00:08:47.229 }, 00:08:47.229 "method": "bdev_nvme_attach_controller" 00:08:47.229 }, 00:08:47.229 { 00:08:47.229 "params": { 00:08:47.229 "trtype": "pcie", 00:08:47.229 "traddr": "0000:00:11.0", 00:08:47.229 "name": "Nvme1" 00:08:47.229 }, 00:08:47.229 "method": "bdev_nvme_attach_controller" 00:08:47.229 }, 00:08:47.229 { 00:08:47.229 "method": "bdev_wait_for_examine" 00:08:47.229 } 00:08:47.229 ] 00:08:47.229 } 00:08:47.229 ] 00:08:47.229 } 00:08:47.487 [2024-07-23 22:10:19.457389] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:47.487 [2024-07-23 22:10:19.473191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.487 [2024-07-23 22:10:19.522082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.487 [2024-07-23 22:10:19.563654] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:47.745  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:47.745 00:08:47.745 22:10:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:47.745 22:10:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:47.745 00:08:47.745 real 0m2.710s 00:08:47.746 user 0m1.937s 00:08:47.746 sys 0m0.799s 00:08:47.746 22:10:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:47.746 ************************************ 00:08:47.746 END TEST dd_offset_magic 00:08:47.746 ************************************ 00:08:47.746 22:10:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:48.004 22:10:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:48.004 22:10:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:48.004 22:10:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:48.004 22:10:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:48.004 22:10:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:48.004 22:10:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:48.004 22:10:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:48.004 22:10:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 
--count=5 --json /dev/fd/62 00:08:48.004 22:10:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:48.004 22:10:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:48.004 22:10:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:48.004 [2024-07-23 22:10:20.013929] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:48.004 [2024-07-23 22:10:20.014033] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77305 ] 00:08:48.004 { 00:08:48.004 "subsystems": [ 00:08:48.004 { 00:08:48.004 "subsystem": "bdev", 00:08:48.004 "config": [ 00:08:48.004 { 00:08:48.004 "params": { 00:08:48.004 "trtype": "pcie", 00:08:48.004 "traddr": "0000:00:10.0", 00:08:48.004 "name": "Nvme0" 00:08:48.004 }, 00:08:48.004 "method": "bdev_nvme_attach_controller" 00:08:48.004 }, 00:08:48.004 { 00:08:48.004 "params": { 00:08:48.004 "trtype": "pcie", 00:08:48.004 "traddr": "0000:00:11.0", 00:08:48.004 "name": "Nvme1" 00:08:48.004 }, 00:08:48.004 "method": "bdev_nvme_attach_controller" 00:08:48.004 }, 00:08:48.004 { 00:08:48.004 "method": "bdev_wait_for_examine" 00:08:48.004 } 00:08:48.004 ] 00:08:48.004 } 00:08:48.004 ] 00:08:48.004 } 00:08:48.004 [2024-07-23 22:10:20.140184] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:48.004 [2024-07-23 22:10:20.156633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.262 [2024-07-23 22:10:20.205363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.262 [2024-07-23 22:10:20.247025] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:48.520  Copying: 5120/5120 [kB] (average 1000 MBps) 00:08:48.520 00:08:48.520 22:10:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:48.520 22:10:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:48.520 22:10:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:48.520 22:10:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:48.520 22:10:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:48.520 22:10:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:48.520 22:10:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:48.520 22:10:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:48.520 22:10:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:48.520 22:10:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:48.520 [2024-07-23 22:10:20.628147] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:08:48.520 [2024-07-23 22:10:20.628249] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77326 ] 00:08:48.520 { 00:08:48.520 "subsystems": [ 00:08:48.520 { 00:08:48.520 "subsystem": "bdev", 00:08:48.520 "config": [ 00:08:48.520 { 00:08:48.520 "params": { 00:08:48.520 "trtype": "pcie", 00:08:48.520 "traddr": "0000:00:10.0", 00:08:48.520 "name": "Nvme0" 00:08:48.520 }, 00:08:48.520 "method": "bdev_nvme_attach_controller" 00:08:48.520 }, 00:08:48.520 { 00:08:48.520 "params": { 00:08:48.520 "trtype": "pcie", 00:08:48.520 "traddr": "0000:00:11.0", 00:08:48.521 "name": "Nvme1" 00:08:48.521 }, 00:08:48.521 "method": "bdev_nvme_attach_controller" 00:08:48.521 }, 00:08:48.521 { 00:08:48.521 "method": "bdev_wait_for_examine" 00:08:48.521 } 00:08:48.521 ] 00:08:48.521 } 00:08:48.521 ] 00:08:48.521 } 00:08:48.778 [2024-07-23 22:10:20.745376] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:48.778 [2024-07-23 22:10:20.761010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.778 [2024-07-23 22:10:20.810104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.778 [2024-07-23 22:10:20.851292] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:49.036  Copying: 5120/5120 [kB] (average 833 MBps) 00:08:49.036 00:08:49.036 22:10:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:49.036 ************************************ 00:08:49.036 END TEST spdk_dd_bdev_to_bdev 00:08:49.036 ************************************ 00:08:49.036 00:08:49.036 real 0m6.502s 00:08:49.036 user 0m4.636s 00:08:49.036 sys 0m3.130s 00:08:49.036 22:10:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:49.036 22:10:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:49.294 22:10:21 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:49.295 22:10:21 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:49.295 22:10:21 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:49.295 22:10:21 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.295 22:10:21 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:49.295 ************************************ 00:08:49.295 START TEST spdk_dd_uring 00:08:49.295 ************************************ 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:49.295 * Looking for test storage... 
00:08:49.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:49.295 ************************************ 00:08:49.295 START TEST dd_uring_copy 00:08:49.295 ************************************ 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:08:49.295 22:10:21 
spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:49.295 22:10:21 
spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=okxjp3p2224xhvvmaflqj75fkiq7d46h3slf42sp2rystn80dlu097fj31o6ngc09leq0erhqah3ilopr3v5zsi7ts037e842pei42csxypofo531jvt6tm1s9cb36z8ez2r1447j8yzf2l1p5eoukhqtdevbfgoq3yvm65mhfls9ba08vwbz3tyy25nsqhorab3gfakrepr5zlglbtbwdn6mmhzi55kl4j1ggm21n6hmheoh5fxehu80yq13od8oxcznzk9jfl6o343x4bha06knouynerbqbzoot6qjp18dozfblh8w2z7l1g0e3ahauwjd7138tspq9e7z5x76sjy26vtjzkwv7th578f9qegeh1fplh4byeotmiaee07org6bv4ulx0wizxp0rta8qogqdgws32ttfddvkw8lrnebu38i55ubmxcljswweo5kwg4klt0aj7phbkaim62o1ugruehmyvehkch0tuerc7nzo583wfochg08yk2xqp0yx1r2fgkllfxf8ep8yt1in7yfxiyx4pmhs8r5y12nsoo5u9njcpynmdsp9jqnu4mqpxnitonfnmzks9e9l4g55p93eazvvwjfpr00kllojp46uw8zrq35lks0295as7zqnx7qwayntdg7y6vjadlwqvvf2k38wjk31rtikqkddjixy6felr0kbnk1jp4cs9yubjg7i770jfpqjmlirltcpu2dvoiyahg6qglp7gj6yytg00vjzuvtbhc7ovhmwgkp3258th05qqt44idpdx5twbwx0onu4sgq6bzst00ou4uvg3a1z5xvmc8b6wj99cuy8lup5cfx0i0pxv39td7jqd85h74d5kqku43i47evuvjwvh3c0ubg8qux6c2lluir0r4vve3jv93klxj5acmh5xfk25uirqqbgbqj855l4ksmyerkvau0pdjj5x76nw7ao56zfgf1k4uago5tqcyqgb0nl6290krcrmm0geyie9sfj1j7vz3xvrnvu5cutee 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
okxjp3p2224xhvvmaflqj75fkiq7d46h3slf42sp2rystn80dlu097fj31o6ngc09leq0erhqah3ilopr3v5zsi7ts037e842pei42csxypofo531jvt6tm1s9cb36z8ez2r1447j8yzf2l1p5eoukhqtdevbfgoq3yvm65mhfls9ba08vwbz3tyy25nsqhorab3gfakrepr5zlglbtbwdn6mmhzi55kl4j1ggm21n6hmheoh5fxehu80yq13od8oxcznzk9jfl6o343x4bha06knouynerbqbzoot6qjp18dozfblh8w2z7l1g0e3ahauwjd7138tspq9e7z5x76sjy26vtjzkwv7th578f9qegeh1fplh4byeotmiaee07org6bv4ulx0wizxp0rta8qogqdgws32ttfddvkw8lrnebu38i55ubmxcljswweo5kwg4klt0aj7phbkaim62o1ugruehmyvehkch0tuerc7nzo583wfochg08yk2xqp0yx1r2fgkllfxf8ep8yt1in7yfxiyx4pmhs8r5y12nsoo5u9njcpynmdsp9jqnu4mqpxnitonfnmzks9e9l4g55p93eazvvwjfpr00kllojp46uw8zrq35lks0295as7zqnx7qwayntdg7y6vjadlwqvvf2k38wjk31rtikqkddjixy6felr0kbnk1jp4cs9yubjg7i770jfpqjmlirltcpu2dvoiyahg6qglp7gj6yytg00vjzuvtbhc7ovhmwgkp3258th05qqt44idpdx5twbwx0onu4sgq6bzst00ou4uvg3a1z5xvmc8b6wj99cuy8lup5cfx0i0pxv39td7jqd85h74d5kqku43i47evuvjwvh3c0ubg8qux6c2lluir0r4vve3jv93klxj5acmh5xfk25uirqqbgbqj855l4ksmyerkvau0pdjj5x76nw7ao56zfgf1k4uago5tqcyqgb0nl6290krcrmm0geyie9sfj1j7vz3xvrnvu5cutee 00:08:49.295 22:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:49.295 [2024-07-23 22:10:21.467589] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:49.295 [2024-07-23 22:10:21.467951] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77390 ] 00:08:49.552 [2024-07-23 22:10:21.594165] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:49.552 [2024-07-23 22:10:21.609295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.552 [2024-07-23 22:10:21.658333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.552 [2024-07-23 22:10:21.699310] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:50.682  Copying: 511/511 [MB] (average 1179 MBps) 00:08:50.682 00:08:50.682 22:10:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:50.682 22:10:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:08:50.682 22:10:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:50.682 22:10:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:50.682 { 00:08:50.682 "subsystems": [ 00:08:50.682 { 00:08:50.682 "subsystem": "bdev", 00:08:50.682 "config": [ 00:08:50.682 { 00:08:50.682 "params": { 00:08:50.682 "block_size": 512, 00:08:50.682 "num_blocks": 1048576, 00:08:50.682 "name": "malloc0" 00:08:50.682 }, 00:08:50.682 "method": "bdev_malloc_create" 00:08:50.682 }, 00:08:50.682 { 00:08:50.682 "params": { 00:08:50.682 "filename": "/dev/zram1", 00:08:50.682 "name": "uring0" 00:08:50.682 }, 00:08:50.682 "method": "bdev_uring_create" 00:08:50.682 }, 00:08:50.682 { 00:08:50.682 "method": "bdev_wait_for_examine" 00:08:50.682 } 00:08:50.682 ] 00:08:50.682 } 00:08:50.682 ] 00:08:50.682 } 00:08:50.682 [2024-07-23 22:10:22.690165] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:08:50.682 [2024-07-23 22:10:22.690261] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77412 ] 00:08:50.682 [2024-07-23 22:10:22.816448] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:50.683 [2024-07-23 22:10:22.831557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.940 [2024-07-23 22:10:22.880290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.940 [2024-07-23 22:10:22.921590] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:53.139  Copying: 263/512 [MB] (263 MBps) Copying: 512/512 [MB] (average 265 MBps) 00:08:53.139 00:08:53.139 22:10:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:53.139 22:10:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:08:53.139 22:10:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:53.139 22:10:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:53.397 [2024-07-23 22:10:25.383809] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:08:53.397 [2024-07-23 22:10:25.383946] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77445 ] 00:08:53.397 { 00:08:53.397 "subsystems": [ 00:08:53.397 { 00:08:53.397 "subsystem": "bdev", 00:08:53.397 "config": [ 00:08:53.397 { 00:08:53.397 "params": { 00:08:53.397 "block_size": 512, 00:08:53.397 "num_blocks": 1048576, 00:08:53.397 "name": "malloc0" 00:08:53.397 }, 00:08:53.397 "method": "bdev_malloc_create" 00:08:53.397 }, 00:08:53.397 { 00:08:53.397 "params": { 00:08:53.397 "filename": "/dev/zram1", 00:08:53.397 "name": "uring0" 00:08:53.397 }, 00:08:53.397 "method": "bdev_uring_create" 00:08:53.397 }, 00:08:53.397 { 00:08:53.397 "method": "bdev_wait_for_examine" 00:08:53.397 } 00:08:53.397 ] 00:08:53.397 } 00:08:53.397 ] 00:08:53.397 } 00:08:53.397 [2024-07-23 22:10:25.510531] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:53.397 [2024-07-23 22:10:25.525808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.397 [2024-07-23 22:10:25.574948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.655 [2024-07-23 22:10:25.616472] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:56.483  Copying: 210/512 [MB] (210 MBps) Copying: 405/512 [MB] (194 MBps) Copying: 512/512 [MB] (average 198 MBps) 00:08:56.483 00:08:56.483 22:10:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:56.483 22:10:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ okxjp3p2224xhvvmaflqj75fkiq7d46h3slf42sp2rystn80dlu097fj31o6ngc09leq0erhqah3ilopr3v5zsi7ts037e842pei42csxypofo531jvt6tm1s9cb36z8ez2r1447j8yzf2l1p5eoukhqtdevbfgoq3yvm65mhfls9ba08vwbz3tyy25nsqhorab3gfakrepr5zlglbtbwdn6mmhzi55kl4j1ggm21n6hmheoh5fxehu80yq13od8oxcznzk9jfl6o343x4bha06knouynerbqbzoot6qjp18dozfblh8w2z7l1g0e3ahauwjd7138tspq9e7z5x76sjy26vtjzkwv7th578f9qegeh1fplh4byeotmiaee07org6bv4ulx0wizxp0rta8qogqdgws32ttfddvkw8lrnebu38i55ubmxcljswweo5kwg4klt0aj7phbkaim62o1ugruehmyvehkch0tuerc7nzo583wfochg08yk2xqp0yx1r2fgkllfxf8ep8yt1in7yfxiyx4pmhs8r5y12nsoo5u9njcpynmdsp9jqnu4mqpxnitonfnmzks9e9l4g55p93eazvvwjfpr00kllojp46uw8zrq35lks0295as7zqnx7qwayntdg7y6vjadlwqvvf2k38wjk31rtikqkddjixy6felr0kbnk1jp4cs9yubjg7i770jfpqjmlirltcpu2dvoiyahg6qglp7gj6yytg00vjzuvtbhc7ovhmwgkp3258th05qqt44idpdx5twbwx0onu4sgq6bzst00ou4uvg3a1z5xvmc8b6wj99cuy8lup5cfx0i0pxv39td7jqd85h74d5kqku43i47evuvjwvh3c0ubg8qux6c2lluir0r4vve3jv93klxj5acmh5xfk25uirqqbgbqj855l4ksmyerkvau0pdjj5x76nw7ao56zfgf1k4uago5tqcyqgb0nl6290krcrmm0geyie9sfj1j7vz3xvrnvu5cutee == 
\o\k\x\j\p\3\p\2\2\2\4\x\h\v\v\m\a\f\l\q\j\7\5\f\k\i\q\7\d\4\6\h\3\s\l\f\4\2\s\p\2\r\y\s\t\n\8\0\d\l\u\0\9\7\f\j\3\1\o\6\n\g\c\0\9\l\e\q\0\e\r\h\q\a\h\3\i\l\o\p\r\3\v\5\z\s\i\7\t\s\0\3\7\e\8\4\2\p\e\i\4\2\c\s\x\y\p\o\f\o\5\3\1\j\v\t\6\t\m\1\s\9\c\b\3\6\z\8\e\z\2\r\1\4\4\7\j\8\y\z\f\2\l\1\p\5\e\o\u\k\h\q\t\d\e\v\b\f\g\o\q\3\y\v\m\6\5\m\h\f\l\s\9\b\a\0\8\v\w\b\z\3\t\y\y\2\5\n\s\q\h\o\r\a\b\3\g\f\a\k\r\e\p\r\5\z\l\g\l\b\t\b\w\d\n\6\m\m\h\z\i\5\5\k\l\4\j\1\g\g\m\2\1\n\6\h\m\h\e\o\h\5\f\x\e\h\u\8\0\y\q\1\3\o\d\8\o\x\c\z\n\z\k\9\j\f\l\6\o\3\4\3\x\4\b\h\a\0\6\k\n\o\u\y\n\e\r\b\q\b\z\o\o\t\6\q\j\p\1\8\d\o\z\f\b\l\h\8\w\2\z\7\l\1\g\0\e\3\a\h\a\u\w\j\d\7\1\3\8\t\s\p\q\9\e\7\z\5\x\7\6\s\j\y\2\6\v\t\j\z\k\w\v\7\t\h\5\7\8\f\9\q\e\g\e\h\1\f\p\l\h\4\b\y\e\o\t\m\i\a\e\e\0\7\o\r\g\6\b\v\4\u\l\x\0\w\i\z\x\p\0\r\t\a\8\q\o\g\q\d\g\w\s\3\2\t\t\f\d\d\v\k\w\8\l\r\n\e\b\u\3\8\i\5\5\u\b\m\x\c\l\j\s\w\w\e\o\5\k\w\g\4\k\l\t\0\a\j\7\p\h\b\k\a\i\m\6\2\o\1\u\g\r\u\e\h\m\y\v\e\h\k\c\h\0\t\u\e\r\c\7\n\z\o\5\8\3\w\f\o\c\h\g\0\8\y\k\2\x\q\p\0\y\x\1\r\2\f\g\k\l\l\f\x\f\8\e\p\8\y\t\1\i\n\7\y\f\x\i\y\x\4\p\m\h\s\8\r\5\y\1\2\n\s\o\o\5\u\9\n\j\c\p\y\n\m\d\s\p\9\j\q\n\u\4\m\q\p\x\n\i\t\o\n\f\n\m\z\k\s\9\e\9\l\4\g\5\5\p\9\3\e\a\z\v\v\w\j\f\p\r\0\0\k\l\l\o\j\p\4\6\u\w\8\z\r\q\3\5\l\k\s\0\2\9\5\a\s\7\z\q\n\x\7\q\w\a\y\n\t\d\g\7\y\6\v\j\a\d\l\w\q\v\v\f\2\k\3\8\w\j\k\3\1\r\t\i\k\q\k\d\d\j\i\x\y\6\f\e\l\r\0\k\b\n\k\1\j\p\4\c\s\9\y\u\b\j\g\7\i\7\7\0\j\f\p\q\j\m\l\i\r\l\t\c\p\u\2\d\v\o\i\y\a\h\g\6\q\g\l\p\7\g\j\6\y\y\t\g\0\0\v\j\z\u\v\t\b\h\c\7\o\v\h\m\w\g\k\p\3\2\5\8\t\h\0\5\q\q\t\4\4\i\d\p\d\x\5\t\w\b\w\x\0\o\n\u\4\s\g\q\6\b\z\s\t\0\0\o\u\4\u\v\g\3\a\1\z\5\x\v\m\c\8\b\6\w\j\9\9\c\u\y\8\l\u\p\5\c\f\x\0\i\0\p\x\v\3\9\t\d\7\j\q\d\8\5\h\7\4\d\5\k\q\k\u\4\3\i\4\7\e\v\u\v\j\w\v\h\3\c\0\u\b\g\8\q\u\x\6\c\2\l\l\u\i\r\0\r\4\v\v\e\3\j\v\9\3\k\l\x\j\5\a\c\m\h\5\x\f\k\2\5\u\i\r\q\q\b\g\b\q\j\8\5\5\l\4\k\s\m\y\e\r\k\v\a\u\0\p\d\j\j\5\x\7\6\n\w\7\a\o\5\6\z\f\g\f\1\k\4\u\a\g\o\5\t\q\c\y\q\g\b\0\n\l\6\2\9\0\k\r\c\r\m\m\0\g\e\y
\i\e\9\s\f\j\1\j\7\v\z\3\x\v\r\n\v\u\5\c\u\t\e\e ]] 00:08:56.483 22:10:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:56.483 22:10:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ okxjp3p2224xhvvmaflqj75fkiq7d46h3slf42sp2rystn80dlu097fj31o6ngc09leq0erhqah3ilopr3v5zsi7ts037e842pei42csxypofo531jvt6tm1s9cb36z8ez2r1447j8yzf2l1p5eoukhqtdevbfgoq3yvm65mhfls9ba08vwbz3tyy25nsqhorab3gfakrepr5zlglbtbwdn6mmhzi55kl4j1ggm21n6hmheoh5fxehu80yq13od8oxcznzk9jfl6o343x4bha06knouynerbqbzoot6qjp18dozfblh8w2z7l1g0e3ahauwjd7138tspq9e7z5x76sjy26vtjzkwv7th578f9qegeh1fplh4byeotmiaee07org6bv4ulx0wizxp0rta8qogqdgws32ttfddvkw8lrnebu38i55ubmxcljswweo5kwg4klt0aj7phbkaim62o1ugruehmyvehkch0tuerc7nzo583wfochg08yk2xqp0yx1r2fgkllfxf8ep8yt1in7yfxiyx4pmhs8r5y12nsoo5u9njcpynmdsp9jqnu4mqpxnitonfnmzks9e9l4g55p93eazvvwjfpr00kllojp46uw8zrq35lks0295as7zqnx7qwayntdg7y6vjadlwqvvf2k38wjk31rtikqkddjixy6felr0kbnk1jp4cs9yubjg7i770jfpqjmlirltcpu2dvoiyahg6qglp7gj6yytg00vjzuvtbhc7ovhmwgkp3258th05qqt44idpdx5twbwx0onu4sgq6bzst00ou4uvg3a1z5xvmc8b6wj99cuy8lup5cfx0i0pxv39td7jqd85h74d5kqku43i47evuvjwvh3c0ubg8qux6c2lluir0r4vve3jv93klxj5acmh5xfk25uirqqbgbqj855l4ksmyerkvau0pdjj5x76nw7ao56zfgf1k4uago5tqcyqgb0nl6290krcrmm0geyie9sfj1j7vz3xvrnvu5cutee == 
\o\k\x\j\p\3\p\2\2\2\4\x\h\v\v\m\a\f\l\q\j\7\5\f\k\i\q\7\d\4\6\h\3\s\l\f\4\2\s\p\2\r\y\s\t\n\8\0\d\l\u\0\9\7\f\j\3\1\o\6\n\g\c\0\9\l\e\q\0\e\r\h\q\a\h\3\i\l\o\p\r\3\v\5\z\s\i\7\t\s\0\3\7\e\8\4\2\p\e\i\4\2\c\s\x\y\p\o\f\o\5\3\1\j\v\t\6\t\m\1\s\9\c\b\3\6\z\8\e\z\2\r\1\4\4\7\j\8\y\z\f\2\l\1\p\5\e\o\u\k\h\q\t\d\e\v\b\f\g\o\q\3\y\v\m\6\5\m\h\f\l\s\9\b\a\0\8\v\w\b\z\3\t\y\y\2\5\n\s\q\h\o\r\a\b\3\g\f\a\k\r\e\p\r\5\z\l\g\l\b\t\b\w\d\n\6\m\m\h\z\i\5\5\k\l\4\j\1\g\g\m\2\1\n\6\h\m\h\e\o\h\5\f\x\e\h\u\8\0\y\q\1\3\o\d\8\o\x\c\z\n\z\k\9\j\f\l\6\o\3\4\3\x\4\b\h\a\0\6\k\n\o\u\y\n\e\r\b\q\b\z\o\o\t\6\q\j\p\1\8\d\o\z\f\b\l\h\8\w\2\z\7\l\1\g\0\e\3\a\h\a\u\w\j\d\7\1\3\8\t\s\p\q\9\e\7\z\5\x\7\6\s\j\y\2\6\v\t\j\z\k\w\v\7\t\h\5\7\8\f\9\q\e\g\e\h\1\f\p\l\h\4\b\y\e\o\t\m\i\a\e\e\0\7\o\r\g\6\b\v\4\u\l\x\0\w\i\z\x\p\0\r\t\a\8\q\o\g\q\d\g\w\s\3\2\t\t\f\d\d\v\k\w\8\l\r\n\e\b\u\3\8\i\5\5\u\b\m\x\c\l\j\s\w\w\e\o\5\k\w\g\4\k\l\t\0\a\j\7\p\h\b\k\a\i\m\6\2\o\1\u\g\r\u\e\h\m\y\v\e\h\k\c\h\0\t\u\e\r\c\7\n\z\o\5\8\3\w\f\o\c\h\g\0\8\y\k\2\x\q\p\0\y\x\1\r\2\f\g\k\l\l\f\x\f\8\e\p\8\y\t\1\i\n\7\y\f\x\i\y\x\4\p\m\h\s\8\r\5\y\1\2\n\s\o\o\5\u\9\n\j\c\p\y\n\m\d\s\p\9\j\q\n\u\4\m\q\p\x\n\i\t\o\n\f\n\m\z\k\s\9\e\9\l\4\g\5\5\p\9\3\e\a\z\v\v\w\j\f\p\r\0\0\k\l\l\o\j\p\4\6\u\w\8\z\r\q\3\5\l\k\s\0\2\9\5\a\s\7\z\q\n\x\7\q\w\a\y\n\t\d\g\7\y\6\v\j\a\d\l\w\q\v\v\f\2\k\3\8\w\j\k\3\1\r\t\i\k\q\k\d\d\j\i\x\y\6\f\e\l\r\0\k\b\n\k\1\j\p\4\c\s\9\y\u\b\j\g\7\i\7\7\0\j\f\p\q\j\m\l\i\r\l\t\c\p\u\2\d\v\o\i\y\a\h\g\6\q\g\l\p\7\g\j\6\y\y\t\g\0\0\v\j\z\u\v\t\b\h\c\7\o\v\h\m\w\g\k\p\3\2\5\8\t\h\0\5\q\q\t\4\4\i\d\p\d\x\5\t\w\b\w\x\0\o\n\u\4\s\g\q\6\b\z\s\t\0\0\o\u\4\u\v\g\3\a\1\z\5\x\v\m\c\8\b\6\w\j\9\9\c\u\y\8\l\u\p\5\c\f\x\0\i\0\p\x\v\3\9\t\d\7\j\q\d\8\5\h\7\4\d\5\k\q\k\u\4\3\i\4\7\e\v\u\v\j\w\v\h\3\c\0\u\b\g\8\q\u\x\6\c\2\l\l\u\i\r\0\r\4\v\v\e\3\j\v\9\3\k\l\x\j\5\a\c\m\h\5\x\f\k\2\5\u\i\r\q\q\b\g\b\q\j\8\5\5\l\4\k\s\m\y\e\r\k\v\a\u\0\p\d\j\j\5\x\7\6\n\w\7\a\o\5\6\z\f\g\f\1\k\4\u\a\g\o\5\t\q\c\y\q\g\b\0\n\l\6\2\9\0\k\r\c\r\m\m\0\g\e\y
\i\e\9\s\f\j\1\j\7\v\z\3\x\v\r\n\v\u\5\c\u\t\e\e ]] 00:08:56.483 22:10:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:57.071 22:10:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:57.072 22:10:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:08:57.072 22:10:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:57.072 22:10:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:57.072 [2024-07-23 22:10:29.097547] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:08:57.072 [2024-07-23 22:10:29.097629] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77509 ] 00:08:57.072 { 00:08:57.072 "subsystems": [ 00:08:57.072 { 00:08:57.072 "subsystem": "bdev", 00:08:57.072 "config": [ 00:08:57.072 { 00:08:57.072 "params": { 00:08:57.072 "block_size": 512, 00:08:57.072 "num_blocks": 1048576, 00:08:57.072 "name": "malloc0" 00:08:57.072 }, 00:08:57.072 "method": "bdev_malloc_create" 00:08:57.072 }, 00:08:57.072 { 00:08:57.072 "params": { 00:08:57.072 "filename": "/dev/zram1", 00:08:57.072 "name": "uring0" 00:08:57.072 }, 00:08:57.072 "method": "bdev_uring_create" 00:08:57.072 }, 00:08:57.072 { 00:08:57.072 "method": "bdev_wait_for_examine" 00:08:57.072 } 00:08:57.072 ] 00:08:57.072 } 00:08:57.072 ] 00:08:57.072 } 00:08:57.072 [2024-07-23 22:10:29.215249] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
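The verification step above generates a 1024-character magic string, writes it into magic.dump0, round-trips the data through the uring0 bdev into magic.dump1, then re-reads 1024 bytes from each dump and compares them (the escaped pattern in the `[[ == ]]` test is just bash xtrace printing the glob). A minimal Python sketch of that round-trip check, with a plain file copy standing in for the spdk_dd uring copy and illustrative file names:

```python
import os
import random
import shutil
import string
import tempfile

workdir = tempfile.mkdtemp()
dump0 = os.path.join(workdir, "magic.dump0")
dump1 = os.path.join(workdir, "magic.dump1")

# Generate a 1024-character alphanumeric magic, as gen_bytes 1024 does.
magic = "".join(random.choices(string.ascii_lowercase + string.digits, k=1024))
with open(dump0, "w") as f:
    f.write(magic)

# Stand-in for the spdk_dd copy through the uring0 bdev.
shutil.copyfile(dump0, dump1)

# Mirror `read -rn1024 verify_magic` on both dumps and compare.
with open(dump0) as f0, open(dump1) as f1:
    verify0, verify1 = f0.read(1024), f1.read(1024)

assert verify0 == magic and verify1 == magic
print("magic verified")
```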
00:08:57.072 [2024-07-23 22:10:29.233412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.334 [2024-07-23 22:10:29.281985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.334 [2024-07-23 22:10:29.323431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:00.159  Copying: 200/512 [MB] (200 MBps) Copying: 403/512 [MB] (203 MBps) Copying: 512/512 [MB] (average 201 MBps) 00:09:00.159 00:09:00.159 22:10:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:09:00.159 22:10:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:09:00.159 22:10:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:00.159 22:10:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:00.159 22:10:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:09:00.159 22:10:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:09:00.159 22:10:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:00.159 22:10:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:00.417 [2024-07-23 22:10:32.370918] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:09:00.417 [2024-07-23 22:10:32.371212] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77554 ] 00:09:00.417 { 00:09:00.417 "subsystems": [ 00:09:00.417 { 00:09:00.417 "subsystem": "bdev", 00:09:00.417 "config": [ 00:09:00.417 { 00:09:00.417 "params": { 00:09:00.417 "block_size": 512, 00:09:00.417 "num_blocks": 1048576, 00:09:00.417 "name": "malloc0" 00:09:00.417 }, 00:09:00.417 "method": "bdev_malloc_create" 00:09:00.417 }, 00:09:00.417 { 00:09:00.417 "params": { 00:09:00.417 "filename": "/dev/zram1", 00:09:00.417 "name": "uring0" 00:09:00.417 }, 00:09:00.417 "method": "bdev_uring_create" 00:09:00.417 }, 00:09:00.417 { 00:09:00.417 "params": { 00:09:00.417 "name": "uring0" 00:09:00.417 }, 00:09:00.417 "method": "bdev_uring_delete" 00:09:00.417 }, 00:09:00.417 { 00:09:00.417 "method": "bdev_wait_for_examine" 00:09:00.417 } 00:09:00.417 ] 00:09:00.417 } 00:09:00.417 ] 00:09:00.417 } 00:09:00.417 [2024-07-23 22:10:32.488273] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:00.417 [2024-07-23 22:10:32.503321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.417 [2024-07-23 22:10:32.551888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.417 [2024-07-23 22:10:32.593290] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:00.933  Copying: 0/0 [B] (average 0 Bps) 00:09:00.934 00:09:00.934 22:10:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:09:00.934 22:10:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:00.934 22:10:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:09:00.934 22:10:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:09:00.934 22:10:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:00.934 22:10:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.934 22:10:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:00.934 22:10:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:00.934 22:10:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:00.934 22:10:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.934 22:10:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:00.934 22:10:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.934 22:10:33 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:00.934 22:10:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.934 22:10:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:00.934 22:10:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:01.192 [2024-07-23 22:10:33.131473] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:09:01.192 [2024-07-23 22:10:33.131611] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77583 ] 00:09:01.192 { 00:09:01.192 "subsystems": [ 00:09:01.192 { 00:09:01.192 "subsystem": "bdev", 00:09:01.192 "config": [ 00:09:01.192 { 00:09:01.192 "params": { 00:09:01.192 "block_size": 512, 00:09:01.192 "num_blocks": 1048576, 00:09:01.192 "name": "malloc0" 00:09:01.192 }, 00:09:01.192 "method": "bdev_malloc_create" 00:09:01.192 }, 00:09:01.192 { 00:09:01.192 "params": { 00:09:01.192 "filename": "/dev/zram1", 00:09:01.192 "name": "uring0" 00:09:01.192 }, 00:09:01.192 "method": "bdev_uring_create" 00:09:01.192 }, 00:09:01.192 { 00:09:01.192 "params": { 00:09:01.192 "name": "uring0" 00:09:01.192 }, 00:09:01.192 "method": "bdev_uring_delete" 00:09:01.192 }, 00:09:01.192 { 00:09:01.192 "method": "bdev_wait_for_examine" 00:09:01.192 } 00:09:01.192 ] 00:09:01.192 } 00:09:01.192 ] 00:09:01.192 } 00:09:01.192 [2024-07-23 22:10:33.259712] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:09:01.192 [2024-07-23 22:10:33.277160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.192 [2024-07-23 22:10:33.325794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.192 [2024-07-23 22:10:33.367123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:01.451 [2024-07-23 22:10:33.528457] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:09:01.451 [2024-07-23 22:10:33.528511] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:09:01.451 [2024-07-23 22:10:33.528522] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:09:01.451 [2024-07-23 22:10:33.528534] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:01.708 [2024-07-23 22:10:33.779930] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:01.966 22:10:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:09:01.966 22:10:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:01.966 22:10:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:09:01.966 22:10:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:09:01.966 22:10:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:09:01.966 22:10:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:01.966 22:10:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:09:01.966 22:10:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:09:01.966 22:10:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:09:01.966 22:10:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:09:01.966 22:10:33 
spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:09:01.966 22:10:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:02.225 ************************************ 00:09:02.225 END TEST dd_uring_copy 00:09:02.225 00:09:02.225 real 0m12.826s 00:09:02.225 user 0m8.440s 00:09:02.225 sys 0m10.975s 00:09:02.225 22:10:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:02.225 22:10:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:02.225 ************************************ 00:09:02.225 ************************************ 00:09:02.225 END TEST spdk_dd_uring 00:09:02.225 ************************************ 00:09:02.225 00:09:02.225 real 0m12.988s 00:09:02.225 user 0m8.499s 00:09:02.225 sys 0m11.082s 00:09:02.225 22:10:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:02.225 22:10:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:09:02.225 22:10:34 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:02.225 22:10:34 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:02.225 22:10:34 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.225 22:10:34 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:02.225 ************************************ 00:09:02.225 START TEST spdk_dd_sparse 00:09:02.225 ************************************ 00:09:02.225 22:10:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:02.485 * Looking for test storage... 
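The tail of the uring copy test above exercises the failure path: uring0 is deleted via bdev_uring_delete, the subsequent copy is expected to fail, and the harness's NOT wrapper collapses the non-zero status ("es=237") down to es=1, i.e. an expected failure counts as a pass. A small sketch of that expected-failure pattern (the helper name is illustrative, not the harness's actual function):

```python
import subprocess
import sys

def expect_failure(cmd):
    """Run cmd and report True when it exits non-zero, the way the
    harness's NOT wrapper turns an expected failure into a pass."""
    rc = subprocess.run(cmd).returncode
    return rc != 0

# A child that exits with status 237, like the failed copy in the log.
assert expect_failure([sys.executable, "-c", "raise SystemExit(237)"])
# A child that succeeds must NOT count as an expected failure.
assert not expect_failure([sys.executable, "-c", "pass"])
print("expected failure observed")
```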
00:09:02.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse -- 
dd/sparse.sh@118 -- # prepare 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:09:02.485 1+0 records in 00:09:02.485 1+0 records out 00:09:02.485 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00885328 s, 474 MB/s 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:09:02.485 1+0 records in 00:09:02.485 1+0 records out 00:09:02.485 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00875178 s, 479 MB/s 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:09:02.485 1+0 records in 00:09:02.485 1+0 records out 00:09:02.485 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00705159 s, 595 MB/s 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:02.485 ************************************ 00:09:02.485 START TEST dd_sparse_file_to_file 00:09:02.485 ************************************ 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # file_to_file 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:02.485 
22:10:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:09:02.485 22:10:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:02.485 [2024-07-23 22:10:34.551531] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:09:02.485 [2024-07-23 22:10:34.551918] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77677 ] 00:09:02.485 { 00:09:02.485 "subsystems": [ 00:09:02.485 { 00:09:02.485 "subsystem": "bdev", 00:09:02.485 "config": [ 00:09:02.485 { 00:09:02.485 "params": { 00:09:02.485 "block_size": 4096, 00:09:02.485 "filename": "dd_sparse_aio_disk", 00:09:02.485 "name": "dd_aio" 00:09:02.485 }, 00:09:02.485 "method": "bdev_aio_create" 00:09:02.485 }, 00:09:02.485 { 00:09:02.485 "params": { 00:09:02.485 "lvs_name": "dd_lvstore", 00:09:02.485 "bdev_name": "dd_aio" 00:09:02.485 }, 00:09:02.485 "method": "bdev_lvol_create_lvstore" 00:09:02.485 }, 00:09:02.485 { 00:09:02.485 "method": "bdev_wait_for_examine" 00:09:02.485 } 00:09:02.485 ] 00:09:02.485 } 00:09:02.485 ] 00:09:02.485 } 00:09:02.744 [2024-07-23 22:10:34.678457] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:02.744 [2024-07-23 22:10:34.695993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.744 [2024-07-23 22:10:34.745046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.744 [2024-07-23 22:10:34.786543] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:03.003  Copying: 12/36 [MB] (average 750 MBps) 00:09:03.003 00:09:03.003 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:09:03.003 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:09:03.003 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:09:03.003 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:09:03.003 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:03.003 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:09:03.003 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:09:03.003 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:09:03.003 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:09:03.003 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:03.003 00:09:03.003 real 0m0.617s 00:09:03.003 user 0m0.344s 00:09:03.003 sys 0m0.343s 00:09:03.003 ************************************ 00:09:03.003 END TEST dd_sparse_file_to_file 00:09:03.003 ************************************ 00:09:03.003 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:03.003 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- 
common/autotest_common.sh@10 -- # set +x 00:09:03.003 22:10:35 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:09:03.003 22:10:35 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:03.003 22:10:35 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.003 22:10:35 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:03.003 ************************************ 00:09:03.003 START TEST dd_sparse_file_to_bdev 00:09:03.003 ************************************ 00:09:03.003 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:09:03.003 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:03.003 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:09:03.003 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:09:03.003 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:09:03.003 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:09:03.003 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:09:03.003 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:03.003 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:03.262 [2024-07-23 22:10:35.225152] Starting SPDK v24.09-pre git sha1 78cbcfdde 
/ DPDK 24.07.0-rc2 initialization... 00:09:03.262 [2024-07-23 22:10:35.225794] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77725 ] 00:09:03.262 { 00:09:03.262 "subsystems": [ 00:09:03.262 { 00:09:03.262 "subsystem": "bdev", 00:09:03.262 "config": [ 00:09:03.262 { 00:09:03.262 "params": { 00:09:03.262 "block_size": 4096, 00:09:03.262 "filename": "dd_sparse_aio_disk", 00:09:03.262 "name": "dd_aio" 00:09:03.262 }, 00:09:03.262 "method": "bdev_aio_create" 00:09:03.262 }, 00:09:03.262 { 00:09:03.262 "params": { 00:09:03.262 "lvs_name": "dd_lvstore", 00:09:03.262 "lvol_name": "dd_lvol", 00:09:03.262 "size_in_mib": 36, 00:09:03.262 "thin_provision": true 00:09:03.262 }, 00:09:03.262 "method": "bdev_lvol_create" 00:09:03.262 }, 00:09:03.262 { 00:09:03.262 "method": "bdev_wait_for_examine" 00:09:03.262 } 00:09:03.262 ] 00:09:03.262 } 00:09:03.262 ] 00:09:03.262 } 00:09:03.262 [2024-07-23 22:10:35.353695] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:03.262 [2024-07-23 22:10:35.373533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.262 [2024-07-23 22:10:35.422730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.520 [2024-07-23 22:10:35.464593] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:03.779  Copying: 12/36 [MB] (average 413 MBps) 00:09:03.779 00:09:03.779 00:09:03.779 real 0m0.564s 00:09:03.779 user 0m0.345s 00:09:03.779 sys 0m0.300s 00:09:03.779 ************************************ 00:09:03.779 END TEST dd_sparse_file_to_bdev 00:09:03.779 ************************************ 00:09:03.779 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:03.779 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:03.779 22:10:35 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:09:03.779 22:10:35 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:03.779 22:10:35 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.779 22:10:35 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:03.779 ************************************ 00:09:03.779 START TEST dd_sparse_bdev_to_file 00:09:03.779 ************************************ 00:09:03.779 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:09:03.779 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:09:03.779 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:09:03.779 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:03.779 22:10:35 
spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:09:03.779 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:09:03.779 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:09:03.779 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:09:03.779 22:10:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:03.779 { 00:09:03.779 "subsystems": [ 00:09:03.779 { 00:09:03.779 "subsystem": "bdev", 00:09:03.779 "config": [ 00:09:03.779 { 00:09:03.779 "params": { 00:09:03.779 "block_size": 4096, 00:09:03.779 "filename": "dd_sparse_aio_disk", 00:09:03.779 "name": "dd_aio" 00:09:03.779 }, 00:09:03.779 "method": "bdev_aio_create" 00:09:03.779 }, 00:09:03.779 { 00:09:03.779 "method": "bdev_wait_for_examine" 00:09:03.779 } 00:09:03.779 ] 00:09:03.779 } 00:09:03.779 ] 00:09:03.779 } 00:09:03.779 [2024-07-23 22:10:35.857200] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:09:03.779 [2024-07-23 22:10:35.857514] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77752 ] 00:09:04.037 [2024-07-23 22:10:35.984706] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:04.037 [2024-07-23 22:10:36.001581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.037 [2024-07-23 22:10:36.050776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.037 [2024-07-23 22:10:36.092345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:04.296  Copying: 12/36 [MB] (average 352 MBps) 00:09:04.296 00:09:04.296 22:10:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:09:04.296 22:10:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:09:04.296 22:10:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:09:04.296 22:10:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:09:04.296 22:10:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:04.296 22:10:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:09:04.296 22:10:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:09:04.296 22:10:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:09:04.296 ************************************ 00:09:04.296 END TEST dd_sparse_bdev_to_file 00:09:04.296 ************************************ 00:09:04.296 22:10:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:09:04.296 22:10:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:04.296 00:09:04.296 real 0m0.613s 00:09:04.296 user 0m0.367s 00:09:04.296 sys 0m0.339s 00:09:04.296 22:10:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:04.296 22:10:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- 
common/autotest_common.sh@10 -- # set +x 00:09:04.296 22:10:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:09:04.296 22:10:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:09:04.296 22:10:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:09:04.296 22:10:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:09:04.296 22:10:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:09:04.554 ************************************ 00:09:04.554 END TEST spdk_dd_sparse 00:09:04.554 ************************************ 00:09:04.554 00:09:04.554 real 0m2.159s 00:09:04.554 user 0m1.182s 00:09:04.554 sys 0m1.218s 00:09:04.554 22:10:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:04.554 22:10:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:04.554 22:10:36 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:04.554 22:10:36 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:04.554 22:10:36 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.554 22:10:36 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:04.554 ************************************ 00:09:04.554 START TEST spdk_dd_negative 00:09:04.554 ************************************ 00:09:04.554 22:10:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:04.554 * Looking for test storage... 
00:09:04.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:04.554 22:10:36 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:04.554 22:10:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.554 22:10:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.554 22:10:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.554 22:10:36 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.554 22:10:36 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.555 22:10:36 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.555 22:10:36 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:09:04.555 22:10:36 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.555 22:10:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:04.555 22:10:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:04.555 22:10:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:04.555 22:10:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:04.555 22:10:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:09:04.555 22:10:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:09:04.555 22:10:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.555 22:10:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:04.555 ************************************ 00:09:04.555 START TEST dd_invalid_arguments 00:09:04.555 ************************************ 00:09:04.555 22:10:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:09:04.555 22:10:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:04.555 22:10:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:09:04.555 22:10:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:04.555 22:10:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:04.555 22:10:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:04.555 22:10:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:04.555 22:10:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:04.555 22:10:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:04.555 22:10:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:04.555 22:10:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:04.555 22:10:36 
spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:04.555 22:10:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:04.555 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:09:04.555 00:09:04.555 CPU options: 00:09:04.555 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:09:04.555 (like [0,1,10]) 00:09:04.555 --lcores lcore to CPU mapping list. The list is in the format: 00:09:04.555 [<,lcores[@CPUs]>...] 00:09:04.555 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:04.555 Within the group, '-' is used for range separator, 00:09:04.555 ',' is used for single number separator. 00:09:04.555 '( )' can be omitted for single element group, 00:09:04.555 '@' can be omitted if cpus and lcores have the same value 00:09:04.555 --disable-cpumask-locks Disable CPU core lock files. 00:09:04.555 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:09:04.555 pollers in the app support interrupt mode) 00:09:04.555 -p, --main-core main (primary) core for DPDK 00:09:04.555 00:09:04.555 Configuration options: 00:09:04.555 -c, --config, --json JSON config file 00:09:04.555 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:04.555 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:09:04.555 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:04.555 --rpcs-allowed comma-separated list of permitted RPCS 00:09:04.555 --json-ignore-init-errors don't exit on invalid config entry 00:09:04.555 00:09:04.555 Memory options: 00:09:04.555 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:04.555 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:04.555 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:04.555 -R, --huge-unlink unlink huge files after initialization 00:09:04.555 -n, --mem-channels number of memory channels used for DPDK 00:09:04.555 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:04.555 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:04.555 --no-huge run without using hugepages 00:09:04.555 -i, --shm-id shared memory ID (optional) 00:09:04.555 -g, --single-file-segments force creating just one hugetlbfs file 00:09:04.555 00:09:04.555 PCI options: 00:09:04.555 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:04.555 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:04.555 -u, --no-pci disable PCI access 00:09:04.555 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:04.555 00:09:04.555 Log options: 00:09:04.555 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:09:04.555 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:09:04.555 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:09:04.555 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:09:04.555 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:09:04.555 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:09:04.555 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:09:04.555 thread, trace, uring, vbdev_delay, 
vbdev_gpt, vbdev_lvol, vbdev_opal, 00:09:04.555 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:09:04.555 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:09:04.555 virtio_vfio_user, vmd) 00:09:04.555 --silence-noticelog disable notice level logging to stderr 00:09:04.555 00:09:04.555 Trace options: 00:09:04.555 --num-trace-entries number of trace entries for each core, must be power of 2, 00:09:04.555 setting 0 to disable trace (default 32768) 00:09:04.555 Tracepoints vary in size and can use more than one trace entry. 00:09:04.555 -e, --tpoint-group [:] 00:09:04.555 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:09:04.555 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:09:04.555 [2024-07-23 22:10:36.734504] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:09:04.813 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:09:04.813 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:09:04.813 a tracepoint group. First tpoint inside a group can be enabled by 00:09:04.813 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:09:04.813 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:09:04.813 in /include/spdk_internal/trace_defs.h 00:09:04.813 00:09:04.813 Other options: 00:09:04.813 -h, --help show this usage 00:09:04.813 -v, --version print SPDK version 00:09:04.813 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:04.813 --env-context Opaque context for use of the env implementation 00:09:04.813 00:09:04.813 Application specific: 00:09:04.813 [--------- DD Options ---------] 00:09:04.813 --if Input file. Must specify either --if or --ib. 00:09:04.813 --ib Input bdev. Must specifier either --if or --ib 00:09:04.813 --of Output file. Must specify either --of or --ob. 00:09:04.813 --ob Output bdev. Must specify either --of or --ob. 00:09:04.813 --iflag Input file flags. 
00:09:04.813 --oflag Output file flags. 00:09:04.813 --bs I/O unit size (default: 4096) 00:09:04.813 --qd Queue depth (default: 2) 00:09:04.813 --count I/O unit count. The number of I/O units to copy. (default: all) 00:09:04.813 --skip Skip this many I/O units at start of input. (default: 0) 00:09:04.813 --seek Skip this many I/O units at start of output. (default: 0) 00:09:04.813 --aio Force usage of AIO. (by default io_uring is used if available) 00:09:04.813 --sparse Enable hole skipping in input target 00:09:04.813 Available iflag and oflag values: 00:09:04.813 append - append mode 00:09:04.813 direct - use direct I/O for data 00:09:04.813 directory - fail unless a directory 00:09:04.813 dsync - use synchronized I/O for data 00:09:04.813 noatime - do not update access time 00:09:04.813 noctty - do not assign controlling terminal from file 00:09:04.813 nofollow - do not follow symlinks 00:09:04.813 nonblock - use non-blocking I/O 00:09:04.813 sync - use synchronized I/O for data and metadata 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:04.813 00:09:04.813 real 0m0.075s 00:09:04.813 user 0m0.042s 00:09:04.813 sys 0m0.031s 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:04.813 ************************************ 00:09:04.813 END TEST dd_invalid_arguments 00:09:04.813 ************************************ 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative 
-- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:04.813 ************************************ 00:09:04.813 START TEST dd_double_input 00:09:04.813 ************************************ 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:04.813 [2024-07-23 22:10:36.878372] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:04.813 00:09:04.813 real 0m0.078s 00:09:04.813 user 0m0.048s 00:09:04.813 sys 0m0.029s 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:04.813 ************************************ 00:09:04.813 END TEST dd_double_input 00:09:04.813 ************************************ 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:04.813 
************************************ 00:09:04.813 START TEST dd_double_output 00:09:04.813 ************************************ 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:04.813 22:10:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:05.071 [2024-07-23 22:10:37.016857] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:09:05.071 22:10:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:09:05.071 22:10:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:05.071 22:10:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:05.071 22:10:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:05.071 00:09:05.071 real 0m0.075s 00:09:05.071 user 0m0.043s 00:09:05.071 sys 0m0.031s 00:09:05.071 22:10:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:05.071 22:10:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:09:05.071 ************************************ 00:09:05.071 END TEST dd_double_output 00:09:05.071 ************************************ 00:09:05.071 22:10:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:09:05.071 22:10:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:05.071 22:10:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.071 22:10:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:05.071 ************************************ 00:09:05.071 START TEST dd_no_input 00:09:05.071 ************************************ 00:09:05.071 22:10:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 
00:09:05.071 22:10:37 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:05.071 22:10:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:09:05.071 22:10:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:05.071 22:10:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.071 22:10:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.071 22:10:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.071 22:10:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.071 22:10:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.071 22:10:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.071 22:10:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.071 22:10:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:05.071 22:10:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:05.071 [2024-07-23 22:10:37.161014] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:09:05.071 22:10:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:09:05.071 22:10:37 spdk_dd.spdk_dd_negative.dd_no_input -- 
common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:05.071 22:10:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:05.071 22:10:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:05.071 00:09:05.071 real 0m0.081s 00:09:05.071 user 0m0.046s 00:09:05.071 sys 0m0.035s 00:09:05.071 22:10:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:05.072 22:10:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:09:05.072 ************************************ 00:09:05.072 END TEST dd_no_input 00:09:05.072 ************************************ 00:09:05.072 22:10:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:09:05.072 22:10:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:05.072 22:10:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.072 22:10:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:05.072 ************************************ 00:09:05.072 START TEST dd_no_output 00:09:05.072 ************************************ 00:09:05.072 22:10:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:09:05.072 22:10:37 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:05.072 22:10:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:09:05.072 22:10:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:05.072 22:10:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.072 22:10:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.072 22:10:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.072 22:10:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.072 22:10:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.072 22:10:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.072 22:10:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.072 22:10:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:05.072 22:10:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:05.330 [2024-07-23 22:10:37.301712] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:05.330 00:09:05.330 real 0m0.077s 00:09:05.330 user 0m0.040s 00:09:05.330 sys 0m0.037s 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:09:05.330 ************************************ 00:09:05.330 END TEST dd_no_output 00:09:05.330 ************************************ 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:05.330 ************************************ 00:09:05.330 START TEST dd_wrong_blocksize 00:09:05.330 ************************************ 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:05.330 [2024-07-23 22:10:37.438966] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:05.330 00:09:05.330 real 0m0.076s 00:09:05.330 user 0m0.046s 00:09:05.330 sys 0m0.030s 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:05.330 ************************************ 00:09:05.330 END TEST dd_wrong_blocksize 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- 
common/autotest_common.sh@10 -- # set +x 00:09:05.330 ************************************ 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:05.330 ************************************ 00:09:05.330 START TEST dd_smaller_blocksize 00:09:05.330 ************************************ 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.330 22:10:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.588 22:10:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.588 22:10:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.588 22:10:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.588 22:10:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.588 22:10:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:05.588 22:10:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:05.588 [2024-07-23 22:10:37.574548] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:09:05.588 [2024-07-23 22:10:37.574661] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77976 ] 00:09:05.588 [2024-07-23 22:10:37.701132] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:05.588 [2024-07-23 22:10:37.719258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.588 [2024-07-23 22:10:37.775613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.846 [2024-07-23 22:10:37.822729] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:05.846 [2024-07-23 22:10:37.846503] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:09:05.846 [2024-07-23 22:10:37.846576] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:05.846 [2024-07-23 22:10:37.943489] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:05.846 22:10:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:09:05.846 22:10:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:05.846 22:10:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:09:05.846 22:10:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:09:05.846 22:10:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:09:05.846 22:10:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:05.846 00:09:05.846 real 0m0.510s 00:09:05.846 user 0m0.261s 00:09:05.846 sys 0m0.142s 00:09:05.846 22:10:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:05.846 22:10:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:05.846 ************************************ 00:09:05.846 END TEST dd_smaller_blocksize 00:09:05.846 ************************************ 00:09:06.104 22:10:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:09:06.104 22:10:38 
spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:06.104 22:10:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.104 22:10:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:06.104 ************************************ 00:09:06.104 START TEST dd_invalid_count 00:09:06.104 ************************************ 00:09:06.104 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:09:06.104 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:06.104 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:09:06.104 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:06.104 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.104 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.104 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.104 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.104 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.104 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.104 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.104 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:06.104 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:06.104 [2024-07-23 22:10:38.134357] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:09:06.104 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:09:06.104 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:06.104 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:06.104 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:06.104 00:09:06.104 real 0m0.055s 00:09:06.104 user 0m0.032s 00:09:06.104 sys 0m0.023s 00:09:06.104 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:06.104 ************************************ 00:09:06.104 END TEST dd_invalid_count 00:09:06.104 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:09:06.104 ************************************ 00:09:06.104 22:10:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:09:06.105 22:10:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:06.105 22:10:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.105 22:10:38 
spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:06.105 ************************************ 00:09:06.105 START TEST dd_invalid_oflag 00:09:06.105 ************************************ 00:09:06.105 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:09:06.105 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:06.105 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:09:06.105 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:06.105 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.105 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.105 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.105 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.105 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.105 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.105 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.105 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:06.105 22:10:38 
spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:06.105 [2024-07-23 22:10:38.272392] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:09:06.105 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:09:06.105 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:06.105 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:06.105 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:06.105 00:09:06.105 real 0m0.079s 00:09:06.105 user 0m0.049s 00:09:06.105 sys 0m0.030s 00:09:06.105 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:06.105 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:09:06.105 ************************************ 00:09:06.105 END TEST dd_invalid_oflag 00:09:06.105 ************************************ 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:06.363 ************************************ 00:09:06.363 START TEST dd_invalid_iflag 00:09:06.363 ************************************ 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= 
--iflag=0 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:06.363 [2024-07-23 22:10:38.408499] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:06.363 
22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:06.363 00:09:06.363 real 0m0.079s 00:09:06.363 user 0m0.042s 00:09:06.363 sys 0m0.036s 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:09:06.363 ************************************ 00:09:06.363 END TEST dd_invalid_iflag 00:09:06.363 ************************************ 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:06.363 ************************************ 00:09:06.363 START TEST dd_unknown_flag 00:09:06.363 ************************************ 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:06.363 
22:10:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:06.363 22:10:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:06.363 [2024-07-23 22:10:38.547637] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:09:06.363 [2024-07-23 22:10:38.547756] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78068 ] 00:09:06.620 [2024-07-23 22:10:38.674130] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:09:06.620 [2024-07-23 22:10:38.692810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.620 [2024-07-23 22:10:38.741405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.620 [2024-07-23 22:10:38.781999] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:06.620 [2024-07-23 22:10:38.802946] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:09:06.620 [2024-07-23 22:10:38.802998] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:06.620 [2024-07-23 22:10:38.803044] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:09:06.620 [2024-07-23 22:10:38.803054] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:06.620 [2024-07-23 22:10:38.803264] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:09:06.620 [2024-07-23 22:10:38.803277] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:06.620 [2024-07-23 22:10:38.803326] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:06.620 [2024-07-23 22:10:38.803334] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:06.877 [2024-07-23 22:10:38.892442] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:06.877 22:10:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:09:06.877 22:10:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:06.877 22:10:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:09:06.877 22:10:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:09:06.877 22:10:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:09:06.878 22:10:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:06.878 00:09:06.878 real 0m0.487s 00:09:06.878 user 0m0.250s 00:09:06.878 sys 0m0.145s 00:09:06.878 22:10:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:06.878 22:10:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:09:06.878 ************************************ 00:09:06.878 END TEST dd_unknown_flag 00:09:06.878 ************************************ 00:09:06.878 22:10:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:09:06.878 22:10:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:06.878 22:10:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.878 22:10:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:06.878 ************************************ 00:09:06.878 START TEST dd_invalid_json 00:09:06.878 ************************************ 00:09:06.878 22:10:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:09:06.878 22:10:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:06.878 22:10:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:09:06.878 22:10:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:09:06.878 22:10:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:06.878 22:10:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- 
common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.878 22:10:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.878 22:10:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.878 22:10:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.878 22:10:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.878 22:10:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.878 22:10:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.878 22:10:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:06.878 22:10:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:07.135 [2024-07-23 22:10:39.091550] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:09:07.135 [2024-07-23 22:10:39.091656] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78097 ] 00:09:07.135 [2024-07-23 22:10:39.218048] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:07.135 [2024-07-23 22:10:39.237159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.135 [2024-07-23 22:10:39.285812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.135 [2024-07-23 22:10:39.285904] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:09:07.135 [2024-07-23 22:10:39.285915] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:07.135 [2024-07-23 22:10:39.285923] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:07.135 [2024-07-23 22:10:39.285953] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:07.392 22:10:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:09:07.392 22:10:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:07.392 22:10:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:09:07.392 22:10:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:09:07.392 22:10:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:09:07.392 22:10:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:07.392 00:09:07.392 real 0m0.339s 00:09:07.392 user 0m0.149s 00:09:07.392 sys 0m0.086s 00:09:07.392 22:10:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:07.392 22:10:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:09:07.392 ************************************ 00:09:07.392 END TEST dd_invalid_json 00:09:07.392 ************************************ 00:09:07.392 ************************************ 00:09:07.392 END TEST spdk_dd_negative 00:09:07.392 ************************************ 00:09:07.392 00:09:07.392 real 0m2.863s 00:09:07.392 user 0m1.318s 
00:09:07.392 sys 0m1.213s 00:09:07.392 22:10:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:07.392 22:10:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:07.392 00:09:07.392 real 1m6.031s 00:09:07.392 user 0m40.124s 00:09:07.392 sys 0m30.195s 00:09:07.392 22:10:39 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:07.392 22:10:39 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:07.392 ************************************ 00:09:07.392 END TEST spdk_dd 00:09:07.392 ************************************ 00:09:07.392 22:10:39 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:09:07.392 22:10:39 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:09:07.392 22:10:39 -- spdk/autotest.sh@260 -- # timing_exit lib 00:09:07.392 22:10:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:07.392 22:10:39 -- common/autotest_common.sh@10 -- # set +x 00:09:07.392 22:10:39 -- spdk/autotest.sh@262 -- # '[' 1 -eq 1 ']' 00:09:07.392 22:10:39 -- spdk/autotest.sh@263 -- # run_test iscsi_tgt /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/iscsi_tgt.sh 00:09:07.392 22:10:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:07.392 22:10:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.392 22:10:39 -- common/autotest_common.sh@10 -- # set +x 00:09:07.392 ************************************ 00:09:07.392 START TEST iscsi_tgt 00:09:07.392 ************************************ 00:09:07.392 22:10:39 iscsi_tgt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/iscsi_tgt.sh 00:09:07.650 * Looking for test storage... 00:09:07.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@10 -- # uname -s 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@18 -- # iscsicleanup 00:09:07.650 Cleaning up iSCSI connection 00:09:07.650 22:10:39 iscsi_tgt -- 
common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:09:07.650 22:10:39 iscsi_tgt -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:09:07.650 iscsiadm: No matching sessions found 00:09:07.650 22:10:39 iscsi_tgt -- common/autotest_common.sh@981 -- # true 00:09:07.650 22:10:39 iscsi_tgt -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:09:07.650 iscsiadm: No records found 00:09:07.650 22:10:39 iscsi_tgt -- common/autotest_common.sh@982 -- # true 00:09:07.650 22:10:39 iscsi_tgt -- common/autotest_common.sh@983 -- # rm -rf 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@21 -- # create_veth_interfaces 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@32 -- # ip link set init_br nomaster 00:09:07.650 Cannot find device "init_br" 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@32 -- # true 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@33 -- # ip link set tgt_br nomaster 00:09:07.650 Cannot find device "tgt_br" 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@33 -- # true 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@34 -- # ip link set tgt_br2 nomaster 00:09:07.650 Cannot find device "tgt_br2" 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@34 -- # true 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@35 -- # ip link set init_br down 00:09:07.650 Cannot find device "init_br" 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@35 -- # true 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@36 -- # ip link set tgt_br down 00:09:07.650 Cannot find device "tgt_br" 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@36 -- # true 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@37 -- # ip link set tgt_br2 down 00:09:07.650 Cannot find device "tgt_br2" 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@37 -- # true 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@38 -- # ip link delete iscsi_br type bridge 00:09:07.650 Cannot find device 
"iscsi_br" 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@38 -- # true 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@39 -- # ip link delete spdk_init_int 00:09:07.650 Cannot find device "spdk_init_int" 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@39 -- # true 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@40 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int 00:09:07.650 Cannot open network namespace "spdk_iscsi_ns": No such file or directory 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@40 -- # true 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@41 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int2 00:09:07.650 Cannot open network namespace "spdk_iscsi_ns": No such file or directory 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@41 -- # true 00:09:07.650 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@42 -- # ip netns del spdk_iscsi_ns 00:09:07.650 Cannot remove namespace file "/var/run/netns/spdk_iscsi_ns": No such file or directory 00:09:07.909 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@42 -- # true 00:09:07.909 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@44 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:09:07.909 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@47 -- # ip netns add spdk_iscsi_ns 00:09:07.909 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@50 -- # ip link add spdk_init_int type veth peer name init_br 00:09:07.909 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@51 -- # ip link add spdk_tgt_int type veth peer name tgt_br 00:09:07.909 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@52 -- # ip link add spdk_tgt_int2 type veth peer name tgt_br2 00:09:07.909 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@55 -- # ip link set spdk_tgt_int netns spdk_iscsi_ns 00:09:07.909 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@56 -- # ip link set spdk_tgt_int2 netns spdk_iscsi_ns 00:09:07.909 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@59 -- # ip addr add 10.0.0.2/24 dev spdk_init_int 00:09:07.909 22:10:39 
iscsi_tgt -- iscsi_tgt/common.sh@60 -- # ip netns exec spdk_iscsi_ns ip addr add 10.0.0.1/24 dev spdk_tgt_int 00:09:07.909 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@61 -- # ip netns exec spdk_iscsi_ns ip addr add 10.0.0.3/24 dev spdk_tgt_int2 00:09:07.909 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@64 -- # ip link set spdk_init_int up 00:09:07.909 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@65 -- # ip link set init_br up 00:09:07.909 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@66 -- # ip link set tgt_br up 00:09:07.909 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@67 -- # ip link set tgt_br2 up 00:09:07.909 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@68 -- # ip netns exec spdk_iscsi_ns ip link set spdk_tgt_int up 00:09:07.909 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@69 -- # ip netns exec spdk_iscsi_ns ip link set spdk_tgt_int2 up 00:09:07.909 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@70 -- # ip netns exec spdk_iscsi_ns ip link set lo up 00:09:07.909 22:10:39 iscsi_tgt -- iscsi_tgt/common.sh@73 -- # ip link add iscsi_br type bridge 00:09:07.909 22:10:40 iscsi_tgt -- iscsi_tgt/common.sh@74 -- # ip link set iscsi_br up 00:09:07.909 22:10:40 iscsi_tgt -- iscsi_tgt/common.sh@77 -- # ip link set init_br master iscsi_br 00:09:07.909 22:10:40 iscsi_tgt -- iscsi_tgt/common.sh@78 -- # ip link set tgt_br master iscsi_br 00:09:07.909 22:10:40 iscsi_tgt -- iscsi_tgt/common.sh@79 -- # ip link set tgt_br2 master iscsi_br 00:09:07.909 22:10:40 iscsi_tgt -- iscsi_tgt/common.sh@82 -- # iptables -I INPUT 1 -i spdk_init_int -p tcp --dport 3260 -j ACCEPT 00:09:08.167 22:10:40 iscsi_tgt -- iscsi_tgt/common.sh@83 -- # iptables -A FORWARD -i iscsi_br -o iscsi_br -j ACCEPT 00:09:08.167 22:10:40 iscsi_tgt -- iscsi_tgt/common.sh@86 -- # ping -c 1 10.0.0.1 00:09:08.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:08.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:09:08.167 00:09:08.167 --- 10.0.0.1 ping statistics --- 00:09:08.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.167 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:09:08.167 22:10:40 iscsi_tgt -- iscsi_tgt/common.sh@87 -- # ping -c 1 10.0.0.3 00:09:08.167 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:08.167 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:09:08.167 00:09:08.167 --- 10.0.0.3 ping statistics --- 00:09:08.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.167 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:08.167 22:10:40 iscsi_tgt -- iscsi_tgt/common.sh@88 -- # ip netns exec spdk_iscsi_ns ping -c 1 10.0.0.2 00:09:08.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:09:08.167 00:09:08.167 --- 10.0.0.2 ping statistics --- 00:09:08.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.167 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:09:08.167 22:10:40 iscsi_tgt -- iscsi_tgt/common.sh@89 -- # ip netns exec spdk_iscsi_ns ping -c 1 10.0.0.2 00:09:08.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:08.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:09:08.167 00:09:08.167 --- 10.0.0.2 ping statistics --- 00:09:08.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.167 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:09:08.167 22:10:40 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@23 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:09:08.167 22:10:40 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@25 -- # run_test iscsi_tgt_sock /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock/sock.sh 00:09:08.167 22:10:40 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:08.167 22:10:40 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.167 22:10:40 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:09:08.167 ************************************ 00:09:08.167 START TEST iscsi_tgt_sock 00:09:08.167 ************************************ 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock/sock.sh 00:09:08.167 * Looking for test storage... 
00:09:08.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:09:08.167 22:10:40 
iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@48 -- # iscsitestinit 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@50 -- # HELLO_SOCK_APP='ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/examples/hello_sock' 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@51 -- # SOCAT_APP=socat 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@52 -- # OPENSSL_APP=openssl 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@53 -- # PSK='-N ssl -E 1234567890ABCDEF -I psk.spdk.io' 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@58 -- # timing_enter sock_client 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x 00:09:08.167 Testing client path 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@59 -- # echo 'Testing client path' 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@63 -- # server_pid=78354 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@64 -- # trap 'killprocess $server_pid;iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@66 -- # waitfortcp 78354 10.0.0.2:3260 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@25 -- # local addr=10.0.0.2:3260 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@62 -- # socat tcp-l:3260,fork,bind=10.0.0.2 exec:/bin/cat 00:09:08.167 Waiting for process to start up and listen on address 10.0.0.2:3260... 
00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@27 -- # echo 'Waiting for process to start up and listen on address 10.0.0.2:3260...' 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@29 -- # xtrace_disable 00:09:08.167 22:10:40 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x 00:09:08.732 [2024-07-23 22:10:40.826518] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:09:08.732 [2024-07-23 22:10:40.826619] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78363 ] 00:09:08.990 [2024-07-23 22:10:40.953088] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:08.990 [2024-07-23 22:10:40.972755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.990 [2024-07-23 22:10:41.037218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.990 [2024-07-23 22:10:41.037301] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:09:08.990 [2024-07-23 22:10:41.037332] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix) 00:09:08.990 [2024-07-23 22:10:41.037538] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 49552) 00:09:08.990 [2024-07-23 22:10:41.037610] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 
00:09:09.925 [2024-07-23 22:10:42.037640] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:09:09.925 [2024-07-23 22:10:42.037818] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:09:10.191 [2024-07-23 22:10:42.137091] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:09:10.191 [2024-07-23 22:10:42.137187] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78388 ] 00:09:10.191 [2024-07-23 22:10:42.263507] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:10.191 [2024-07-23 22:10:42.280299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.191 [2024-07-23 22:10:42.340460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.191 [2024-07-23 22:10:42.340547] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:09:10.191 [2024-07-23 22:10:42.340568] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix) 00:09:10.191 [2024-07-23 22:10:42.340721] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 36496) 00:09:10.191 [2024-07-23 22:10:42.340773] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:09:11.562 [2024-07-23 22:10:43.340799] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:09:11.562 [2024-07-23 22:10:43.341027] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:09:11.562 [2024-07-23 22:10:43.444054] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:09:11.562 [2024-07-23 22:10:43.444149] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78407 ] 00:09:11.562 [2024-07-23 22:10:43.571258] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:11.562 [2024-07-23 22:10:43.589237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.562 [2024-07-23 22:10:43.648027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.562 [2024-07-23 22:10:43.648094] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:09:11.562 [2024-07-23 22:10:43.648115] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix) 00:09:11.562 [2024-07-23 22:10:43.648356] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 36508) 00:09:11.562 [2024-07-23 22:10:43.648414] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:09:12.492 [2024-07-23 22:10:44.648439] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:09:12.492 [2024-07-23 22:10:44.648603] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:09:12.750 killing process with pid 78354 00:09:12.750 Testing SSL server path 00:09:12.750 Waiting for process to start up and listen on address 10.0.0.1:3260... 00:09:12.750 [2024-07-23 22:10:44.832301] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:09:12.750 [2024-07-23 22:10:44.832683] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78450 ] 00:09:13.008 [2024-07-23 22:10:44.959625] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:13.008 [2024-07-23 22:10:44.975198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.008 [2024-07-23 22:10:45.023916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.008 [2024-07-23 22:10:45.024257] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:09:13.008 [2024-07-23 22:10:45.024334] hello_sock.c: 472:hello_sock_listen: *NOTICE*: Listening connection on 10.0.0.1:3260 with sock_impl(ssl) 00:09:13.265 [2024-07-23 22:10:45.352118] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:09:13.265 [2024-07-23 22:10:45.352243] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78461 ] 00:09:13.523 [2024-07-23 22:10:45.478167] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:13.523 [2024-07-23 22:10:45.500879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.523 [2024-07-23 22:10:45.567760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.523 [2024-07-23 22:10:45.568147] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:09:13.523 [2024-07-23 22:10:45.568319] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:09:13.523 [2024-07-23 22:10:45.570732] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 56920) to (10.0.0.1, 3260) 00:09:13.523 [2024-07-23 22:10:45.571321] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 56920) 00:09:13.523 [2024-07-23 22:10:45.572879] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:09:14.455 [2024-07-23 22:10:46.573096] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:09:14.455 [2024-07-23 22:10:46.573484] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:09:14.455 [2024-07-23 22:10:46.573538] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:09:14.713 [2024-07-23 22:10:46.680319] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:09:14.713 [2024-07-23 22:10:46.680417] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78478 ] 00:09:14.713 [2024-07-23 22:10:46.806290] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:14.713 [2024-07-23 22:10:46.825700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.713 [2024-07-23 22:10:46.889409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.713 [2024-07-23 22:10:46.889650] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:09:14.713 [2024-07-23 22:10:46.889797] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:09:14.713 [2024-07-23 22:10:46.890942] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 56934) to (10.0.0.1, 3260) 00:09:14.713 [2024-07-23 22:10:46.891744] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 56934) 00:09:14.713 [2024-07-23 22:10:46.892766] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:09:16.087 [2024-07-23 22:10:47.892925] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:09:16.087 [2024-07-23 22:10:47.893278] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:09:16.087 [2024-07-23 22:10:47.893331] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:09:16.087 [2024-07-23 22:10:47.991329] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:09:16.087 [2024-07-23 22:10:47.991990] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78498 ] 00:09:16.087 [2024-07-23 22:10:48.117784] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:16.087 [2024-07-23 22:10:48.134891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.087 [2024-07-23 22:10:48.184145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.087 [2024-07-23 22:10:48.184459] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:09:16.087 [2024-07-23 22:10:48.184589] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:09:16.087 [2024-07-23 22:10:48.185288] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 56946) to (10.0.0.1, 3260) 00:09:16.087 [2024-07-23 22:10:48.186416] posix.c: 755:posix_sock_create_ssl_context: *ERROR*: Incorrect TLS version provided: 7 00:09:16.087 [2024-07-23 22:10:48.186579] posix.c:1033:posix_sock_create: *ERROR*: posix_sock_create_ssl_context() failed, errno = 2 00:09:16.087 [2024-07-23 22:10:48.186692] hello_sock.c: 309:hello_sock_connect: *ERROR*: connect error(2): No such file or directory 00:09:16.087 [2024-07-23 22:10:48.186735] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:09:16.087 [2024-07-23 22:10:48.186807] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:16.087 [2024-07-23 22:10:48.186923] hello_sock.c: 591:main: *ERROR*: ERROR starting application 00:09:16.087 [2024-07-23 22:10:48.187011] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:09:16.087 [2024-07-23 22:10:48.280820] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:09:16.087 [2024-07-23 22:10:48.280933] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78509 ] 00:09:16.345 [2024-07-23 22:10:48.405857] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:16.345 [2024-07-23 22:10:48.425334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.346 [2024-07-23 22:10:48.486280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.346 [2024-07-23 22:10:48.486578] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:09:16.346 [2024-07-23 22:10:48.486745] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:09:16.346 [2024-07-23 22:10:48.487460] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 56960) to (10.0.0.1, 3260) 00:09:16.346 [2024-07-23 22:10:48.488623] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 56960) 00:09:16.346 [2024-07-23 22:10:48.489607] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 
00:09:17.722 [2024-07-23 22:10:49.489748] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:09:17.722 [2024-07-23 22:10:49.490107] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:09:17.722 [2024-07-23 22:10:49.490159] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:09:17.722 SSL_connect:before SSL initialization 00:09:17.722 SSL_connect:SSLv3/TLS write client hello 00:09:17.722 [2024-07-23 22:10:49.626840] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.2, 50238) to (10.0.0.1, 3260) 00:09:17.722 SSL_connect:SSLv3/TLS write client hello 00:09:17.722 SSL_connect:SSLv3/TLS read server hello 00:09:17.722 Can't use SSL_get_servername 00:09:17.722 SSL_connect:TLSv1.3 read encrypted extensions 00:09:17.722 SSL_connect:SSLv3/TLS read finished 00:09:17.722 SSL_connect:SSLv3/TLS write change cipher spec 00:09:17.722 SSL_connect:SSLv3/TLS write finished 00:09:17.722 SSL_connect:SSL negotiation finished successfully 00:09:17.722 SSL_connect:SSL negotiation finished successfully 00:09:17.722 SSL_connect:SSLv3/TLS read server session ticket 00:09:19.621 DONE 00:09:19.621 SSL3 alert write:warning:close notify 00:09:19.621 [2024-07-23 22:10:51.575091] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:09:19.621 [2024-07-23 22:10:51.610888] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:09:19.621 [2024-07-23 22:10:51.610987] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78554 ] 00:09:19.621 [2024-07-23 22:10:51.736673] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:19.621 [2024-07-23 22:10:51.757837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.621 [2024-07-23 22:10:51.814464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.621 [2024-07-23 22:10:51.814844] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:09:19.621 [2024-07-23 22:10:51.815006] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:09:19.621 [2024-07-23 22:10:51.816003] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 54756) to (10.0.0.1, 3260) 00:09:19.878 [2024-07-23 22:10:51.817954] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 54756) 00:09:19.878 [2024-07-23 22:10:51.818792] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:09:19.878 [2024-07-23 22:10:51.818812] hello_sock.c: 208:hello_sock_recv_poll: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:09:20.812 [2024-07-23 22:10:52.818808] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:09:20.812 [2024-07-23 22:10:52.819214] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:20.812 [2024-07-23 22:10:52.819302] hello_sock.c: 591:main: *ERROR*: ERROR starting application 00:09:20.812 [2024-07-23 22:10:52.819449] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:09:20.812 [2024-07-23 22:10:52.916423] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:09:20.812 [2024-07-23 22:10:52.916786] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78568 ] 00:09:21.070 [2024-07-23 22:10:53.042034] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:21.070 [2024-07-23 22:10:53.060416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.070 [2024-07-23 22:10:53.109544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.070 [2024-07-23 22:10:53.109858] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:09:21.070 [2024-07-23 22:10:53.109970] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:09:21.070 [2024-07-23 22:10:53.110768] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 54770) to (10.0.0.1, 3260) 00:09:21.070 [2024-07-23 22:10:53.111883] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 54770) 00:09:21.070 [2024-07-23 22:10:53.112262] posix.c: 586:posix_sock_psk_find_session_server_cb: *ERROR*: Unknown Client's PSK ID 00:09:21.070 [2024-07-23 22:10:53.112303] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:09:21.070 [2024-07-23 22:10:53.112305] hello_sock.c: 240:hello_sock_writev_poll: *ERROR*: Write to socket failed. Closing connection... 
00:09:21.070 [2024-07-23 22:10:53.112336] hello_sock.c: 208:hello_sock_recv_poll: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:09:22.005 [2024-07-23 22:10:54.112335] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:09:22.005 [2024-07-23 22:10:54.112497] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:22.006 [2024-07-23 22:10:54.112547] hello_sock.c: 591:main: *ERROR*: ERROR starting application 00:09:22.006 [2024-07-23 22:10:54.112555] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:09:22.263 killing process with pid 78450 00:09:23.196 [2024-07-23 22:10:55.214980] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:09:23.196 [2024-07-23 22:10:55.215206] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:09:23.196 Waiting for process to start up and listen on address 10.0.0.1:3260... 00:09:23.196 [2024-07-23 22:10:55.368409] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:09:23.196 [2024-07-23 22:10:55.368508] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78613 ] 00:09:23.454 [2024-07-23 22:10:55.494981] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:23.454 [2024-07-23 22:10:55.513756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.454 [2024-07-23 22:10:55.562561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.454 [2024-07-23 22:10:55.562650] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:09:23.454 [2024-07-23 22:10:55.562719] hello_sock.c: 472:hello_sock_listen: *NOTICE*: Listening connection on 10.0.0.1:3260 with sock_impl(posix) 00:09:23.711 [2024-07-23 22:10:55.868126] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.2, 39302) to (10.0.0.1, 3260) 00:09:23.711 [2024-07-23 22:10:55.868261] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:09:23.711 killing process with pid 78613 00:09:25.085 [2024-07-23 22:10:56.903136] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:09:25.085 [2024-07-23 22:10:56.903331] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:09:25.085 ************************************ 00:09:25.085 END TEST iscsi_tgt_sock 00:09:25.085 ************************************ 00:09:25.085 00:09:25.085 real 0m16.850s 00:09:25.085 user 0m18.924s 00:09:25.085 sys 0m3.229s 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x 00:09:25.085 22:10:57 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@26 -- # [[ -d /usr/local/calsoft ]] 00:09:25.085 22:10:57 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@27 -- # run_test iscsi_tgt_calsoft /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.sh 00:09:25.085 22:10:57 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:25.085 22:10:57 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:25.085 22:10:57 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:09:25.085 ************************************ 00:09:25.085 
START TEST iscsi_tgt_calsoft 00:09:25.085 ************************************ 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.sh 00:09:25.085 * Looking for test storage... 00:09:25.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@24 -- # 
NETMASK=10.0.0.2/32 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@15 -- # MALLOC_BDEV_SIZE=64 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@16 -- # MALLOC_BLOCK_SIZE=512 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@18 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@19 -- # calsoft_py=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.py 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@22 -- # mkdir -p /usr/local/etc 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@23 -- # cp /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/its.conf /usr/local/etc/ 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@26 -- # echo IP=10.0.0.1 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@28 -- # timing_enter start_iscsi_tgt 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@30 -- # iscsitestinit 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:09:25.085 Process pid: 78705 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- 
calsoft/calsoft.sh@33 -- # pid=78705 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@34 -- # echo 'Process pid: 78705' 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@36 -- # trap 'killprocess $pid; delete_tmp_conf_files; iscsitestfini; exit 1 ' SIGINT SIGTERM EXIT 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@32 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x1 --wait-for-rpc 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@38 -- # waitforlisten 78705 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@829 -- # '[' -z 78705 ']' 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:25.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:25.085 22:10:57 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x 00:09:25.085 [2024-07-23 22:10:57.269435] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:09:25.085 [2024-07-23 22:10:57.270143] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78705 ] 00:09:25.343 [2024-07-23 22:10:57.405290] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:09:25.343 [2024-07-23 22:10:57.417890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.343 [2024-07-23 22:10:57.466895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.276 22:10:58 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:26.276 22:10:58 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@862 -- # return 0 00:09:26.276 22:10:58 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:09:26.276 22:10:58 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:09:26.534 [2024-07-23 22:10:58.648578] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:26.792 iscsi_tgt is listening. Running tests... 00:09:26.792 22:10:58 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@41 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:09:26.792 22:10:58 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@43 -- # timing_exit start_iscsi_tgt 00:09:26.792 22:10:58 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:26.792 22:10:58 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x 00:09:26.792 22:10:58 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_auth_group 1 -c 'user:root secret:tester' 00:09:27.050 22:10:59 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_discovery_auth -g 1 00:09:27.308 22:10:59 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:09:27.308 22:10:59 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:09:27.567 22:10:59 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b MyBdev 64 512 00:09:27.858 MyBdev 00:09:27.858 22:10:59 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias MyBdev:0 1:2 64 -g 1 00:09:28.116 22:11:00 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@55 -- # sleep 1 00:09:29.050 22:11:01 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@57 -- # '[' '' ']' 00:09:29.050 22:11:01 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.py /home/vagrant/spdk_repo/spdk/../output 00:09:29.050 [2024-07-23 22:11:01.202073] iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:09:29.050 [2024-07-23 22:11:01.202181] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:29.050 [2024-07-23 22:11:01.224509] 
iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=ffffffff 00:09:29.308 [2024-07-23 22:11:01.287006] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:29.308 [2024-07-23 22:11:01.342146] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:09:29.308 [2024-07-23 22:11:01.363367] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:29.308 [2024-07-23 22:11:01.363478] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:29.308 [2024-07-23 22:11:01.384142] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:29.308 [2024-07-23 22:11:01.425844] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:29.308 [2024-07-23 22:11:01.425981] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:29.308 [2024-07-23 22:11:01.445400] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:29.308 [2024-07-23 22:11:01.466239] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:29.566 [2024-07-23 22:11:01.506077] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:29.566 [2024-07-23 22:11:01.547536] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:29.566 [2024-07-23 22:11:01.586307] iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 4/max 1, expecting 0 00:09:29.566 [2024-07-23 22:11:01.606785] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:09:29.566 [2024-07-23 22:11:01.644391] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:29.566 [2024-07-23 22:11:01.644498] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:29.566 [2024-07-23 22:11:01.679885] iscsi.c:4522:iscsi_pdu_hdr_handle: *ERROR*: before Full Feature 00:09:29.566 PDU 00:09:29.566 00000000 00 81 00 00 00 00 00 81 00 02 
3d 03 00 00 00 00 ..........=..... 00:09:29.566 00000010 00 00 00 05 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:09:29.566 00000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:09:29.566 [2024-07-23 22:11:01.679941] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. Close the connection 00:09:29.566 [2024-07-23 22:11:01.716252] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:29.566 [2024-07-23 22:11:01.737199] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:29.566 [2024-07-23 22:11:01.737306] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:29.824 [2024-07-23 22:11:01.781955] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12 00:09:29.824 [2024-07-23 22:11:01.803468] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:29.824 [2024-07-23 22:11:01.820336] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:29.824 [2024-07-23 22:11:01.820431] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:29.824 [2024-07-23 22:11:01.840041] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:29.824 [2024-07-23 22:11:01.840134] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:29.824 [2024-07-23 22:11:01.893553] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:29.824 [2024-07-23 22:11:01.893696] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:29.824 [2024-07-23 22:11:01.912387] iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 4/max 1, expecting 0 00:09:29.824 [2024-07-23 22:11:01.972969] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:29.824 [2024-07-23 22:11:02.014496] iscsi.c:4522:iscsi_pdu_hdr_handle: *ERROR*: before Full Feature 00:09:29.824 PDU 00:09:29.824 
00000000 01 81 00 00 00 00 00 81 00 02 3d 03 00 00 00 00 ..........=..... 00:09:29.824 00000010 00 00 00 05 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:09:29.824 00000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:09:29.824 [2024-07-23 22:11:02.014561] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. Close the connection 00:09:30.081 [2024-07-23 22:11:02.072854] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:30.081 [2024-07-23 22:11:02.073134] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:30.081 [2024-07-23 22:11:02.092786] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:30.081 [2024-07-23 22:11:02.092898] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:30.081 [2024-07-23 22:11:02.114350] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:30.081 [2024-07-23 22:11:02.134260] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:30.081 [2024-07-23 22:11:02.134363] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:30.081 [2024-07-23 22:11:02.153837] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:09:30.081 [2024-07-23 22:11:02.196681] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:30.081 [2024-07-23 22:11:02.196928] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:30.081 [2024-07-23 22:11:02.260529] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:30.081 [2024-07-23 22:11:02.260634] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:30.339 [2024-07-23 22:11:02.279272] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(341) ignore (ExpCmdSN=8, MaxCmdSN=71) 00:09:30.339 [2024-07-23 22:11:02.279402] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: 
CmdSN(8) ignore (ExpCmdSN=9, MaxCmdSN=71) 00:09:30.339 [2024-07-23 22:11:02.280032] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12 00:09:30.339 [2024-07-23 22:11:02.321366] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:09:30.339 [2024-07-23 22:11:02.342898] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12 00:09:30.339 [2024-07-23 22:11:02.420768] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:30.339 [2024-07-23 22:11:02.420895] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:30.339 [2024-07-23 22:11:02.442503] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:30.339 [2024-07-23 22:11:02.481130] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:30.339 [2024-07-23 22:11:02.481237] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:30.597 [2024-07-23 22:11:02.580978] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:32.495 [2024-07-23 22:11:04.541099] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:32.495 [2024-07-23 22:11:04.647681] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:32.495 [2024-07-23 22:11:04.647803] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:32.495 [2024-07-23 22:11:04.668418] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:32.495 [2024-07-23 22:11:04.668527] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:32.495 [2024-07-23 22:11:04.685943] param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 276 00:09:32.495 [2024-07-23 22:11:04.685980] iscsi.c:1303:iscsi_op_login_store_incoming_params: *ERROR*: iscsi_parse_params() failed 00:09:32.752 [2024-07-23 22:11:04.731385] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error 
ExpCmdSN=1 00:09:32.752 [2024-07-23 22:11:04.751585] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:32.752 [2024-07-23 22:11:04.751675] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:32.752 [2024-07-23 22:11:04.793175] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:32.752 [2024-07-23 22:11:04.851262] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:32.752 [2024-07-23 22:11:04.851536] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:32.752 [2024-07-23 22:11:04.892128] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:32.752 [2024-07-23 22:11:04.909844] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:33.009 [2024-07-23 22:11:05.025444] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:33.009 [2024-07-23 22:11:05.025606] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:33.009 [2024-07-23 22:11:05.042242] iscsi.c:4234:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 2745410467, and the dataout task tag is 2728567458 00:09:33.009 [2024-07-23 22:11:05.042351] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:09:33.009 [2024-07-23 22:11:05.042655] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:09:33.009 [2024-07-23 22:11:05.042703] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:09:33.009 [2024-07-23 22:11:05.059752] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:33.009 [2024-07-23 22:11:05.131954] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:33.009 [2024-07-23 22:11:05.132058] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:33.009 [2024-07-23 22:11:05.151726] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) 
ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:33.009 [2024-07-23 22:11:05.151831] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:33.009 [2024-07-23 22:11:05.170440] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:33.266 [2024-07-23 22:11:05.207945] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:33.266 [2024-07-23 22:11:05.208055] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:33.266 [2024-07-23 22:11:05.228648] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(2) ignore (ExpCmdSN=3, MaxCmdSN=66) 00:09:33.266 [2024-07-23 22:11:05.228742] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:33.266 [2024-07-23 22:11:05.228787] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:33.266 [2024-07-23 22:11:05.268369] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:33.266 [2024-07-23 22:11:05.268473] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:33.266 [2024-07-23 22:11:05.289268] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:09:33.266 [2024-07-23 22:11:05.310613] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:33.266 [2024-07-23 22:11:05.332298] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:33.266 [2024-07-23 22:11:05.371803] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:33.266 [2024-07-23 22:11:05.372156] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:33.266 [2024-07-23 22:11:05.408249] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:33.266 [2024-07-23 22:11:05.408353] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:33.266 [2024-07-23 22:11:05.449286] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 
00:09:33.523 [2024-07-23 22:11:05.484688] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key ImmediateDataa 00:09:33.780 [2024-07-23 22:11:05.792428] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=2 00:09:33.780 [2024-07-23 22:11:05.814174] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=2 00:09:33.780 [2024-07-23 22:11:05.853774] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:33.780 [2024-07-23 22:11:05.874457] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:33.780 [2024-07-23 22:11:05.874715] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:33.780 [2024-07-23 22:11:05.893627] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:33.780 [2024-07-23 22:11:05.909193] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:33.780 [2024-07-23 22:11:05.909293] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:33.780 [2024-07-23 22:11:05.966953] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:33.781 [2024-07-23 22:11:05.967109] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:34.038 [2024-07-23 22:11:06.022254] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9 00:09:34.038 [2024-07-23 22:11:06.057571] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:34.038 [2024-07-23 22:11:06.126433] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:34.038 [2024-07-23 22:11:06.160027] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:34.038 [2024-07-23 22:11:06.160137] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:34.296 [2024-07-23 22:11:06.236454] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:34.296 [2024-07-23 22:11:06.276028] 
iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:34.296 [2024-07-23 22:11:06.307800] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(3) error ExpCmdSN=4 00:09:34.296 [2024-07-23 22:11:06.307951] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:34.296 [2024-07-23 22:11:06.347199] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:34.296 [2024-07-23 22:11:06.364207] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:34.296 [2024-07-23 22:11:06.364305] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(4) ignore (ExpCmdSN=5, MaxCmdSN=67) 00:09:34.296 [2024-07-23 22:11:06.364657] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:09:34.296 [2024-07-23 22:11:06.401254] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:34.296 [2024-07-23 22:11:06.419131] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:34.296 [2024-07-23 22:11:06.419232] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:34.296 [2024-07-23 22:11:06.457353] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:09:34.554 [2024-07-23 22:11:06.538625] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:34.555 [2024-07-23 22:11:06.538747] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:34.555 [2024-07-23 22:11:06.560814] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=8, MaxCmdSN=71) 00:09:34.555 [2024-07-23 22:11:06.560918] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9 00:09:34.555 [2024-07-23 22:11:06.582107] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:34.555 [2024-07-23 22:11:06.582210] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:34.555 [2024-07-23 22:11:06.660742] iscsi.c:4448:iscsi_update_cmdsn: 
*ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:34.555 [2024-07-23 22:11:06.660844] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:09:34.555 [2024-07-23 22:11:06.680779] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:09:34.555 [2024-07-23 22:11:06.722093] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:34.555 [2024-07-23 22:11:06.722138] iscsi.c:3961:iscsi_handle_recovery_datain: *ERROR*: Initiator requests BegRun: 0x00000000, RunLength:0x00001000 greater than maximum DataSN: 0x00000004. 00:09:34.555 [2024-07-23 22:11:06.722150] iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=10) failed on iqn.2016-06.io.spdk:Target3,t,0x0001(iqn.1994-05.com.redhat:b3283535dc3b,i,0x00230d030000) 00:09:34.555 [2024-07-23 22:11:06.722159] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. Close the connection 00:09:34.555 [2024-07-23 22:11:06.738910] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:09:34.813 [2024-07-23 22:11:06.754600] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(1) ignore (ExpCmdSN=3, MaxCmdSN=66) 00:09:34.813 [2024-07-23 22:11:06.754715] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(1) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:09:34.813 [2024-07-23 22:11:06.755046] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=5, MaxCmdSN=67) 00:09:34.813 [2024-07-23 22:11:06.755114] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=6, MaxCmdSN=67) 00:09:34.813 [2024-07-23 22:11:06.755571] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9 00:09:34.813 [2024-07-23 22:11:06.772590] iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:09:34.813 [2024-07-23 22:11:06.772639] iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on 
iqn.2016-06.io.spdk:Target3,t,0x0001(iqn.1994-05.com.redhat:b3283535dc3b,i,0x00230d030000) 00:09:34.813 [2024-07-23 22:11:06.772649] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. Close the connection 00:09:34.813 [2024-07-23 22:11:06.792883] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:09:35.747 [2024-07-23 22:11:07.833098] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:09:36.681 [2024-07-23 22:11:08.811546] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=6, MaxCmdSN=68) 00:09:36.681 [2024-07-23 22:11:08.812297] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=7 00:09:36.681 [2024-07-23 22:11:08.833414] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=5, MaxCmdSN=68) 00:09:38.056 [2024-07-23 22:11:09.833707] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(4) ignore (ExpCmdSN=6, MaxCmdSN=69) 00:09:38.056 [2024-07-23 22:11:09.833880] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=7, MaxCmdSN=70) 00:09:38.056 [2024-07-23 22:11:09.833897] iscsi.c:4028:iscsi_handle_status_snack: *ERROR*: Unable to find StatSN: 0x00000007. For a StatusSNACK, assuming this is a proactive SNACK for an untransmitted StatSN, ignoring. 
00:09:38.056 [2024-07-23 22:11:09.833910] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=8 00:09:50.260 [2024-07-23 22:11:21.880042] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:09:50.260 [2024-07-23 22:11:21.895633] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:09:50.260 [2024-07-23 22:11:21.921247] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:09:50.260 [2024-07-23 22:11:21.921310] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:09:50.260 [2024-07-23 22:11:21.936166] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:09:50.260 [2024-07-23 22:11:21.962272] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:09:50.260 [2024-07-23 22:11:21.984098] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:09:50.260 [2024-07-23 22:11:22.025295] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:09:50.260 [2024-07-23 22:11:22.026292] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=64 00:09:50.260 [2024-07-23 22:11:22.048038] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1107296256) error ExpCmdSN=66 00:09:50.260 [2024-07-23 22:11:22.067197] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:09:50.260 [2024-07-23 22:11:22.089267] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=67 00:09:50.260 Skipping tc_ffp_15_2. It is known to fail. 00:09:50.260 Skipping tc_ffp_29_2. It is known to fail. 00:09:50.260 Skipping tc_ffp_29_3. It is known to fail. 00:09:50.260 Skipping tc_ffp_29_4. It is known to fail. 00:09:50.260 Skipping tc_err_1_1. It is known to fail. 00:09:50.260 Skipping tc_err_1_2. It is known to fail. 00:09:50.260 Skipping tc_err_2_8. It is known to fail. 00:09:50.260 Skipping tc_err_3_1. It is known to fail. 00:09:50.260 Skipping tc_err_3_2. It is known to fail. 
00:09:50.260 Skipping tc_err_3_3. It is known to fail. 00:09:50.260 Skipping tc_err_3_4. It is known to fail. 00:09:50.260 Skipping tc_err_5_1. It is known to fail. 00:09:50.260 Skipping tc_login_3_1. It is known to fail. 00:09:50.260 Skipping tc_login_11_2. It is known to fail. 00:09:50.260 Skipping tc_login_11_4. It is known to fail. 00:09:50.260 Skipping tc_login_2_2. It is known to fail. 00:09:50.260 Skipping tc_login_29_1. It is known to fail. 00:09:50.260 22:11:22 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@62 -- # failed=0 00:09:50.260 22:11:22 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:50.260 Cleaning up iSCSI connection 00:09:50.260 22:11:22 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@67 -- # iscsicleanup 00:09:50.260 22:11:22 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:09:50.260 22:11:22 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:09:50.260 iscsiadm: No matching sessions found 00:09:50.260 22:11:22 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@981 -- # true 00:09:50.260 22:11:22 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:09:50.260 iscsiadm: No records found 00:09:50.260 22:11:22 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@982 -- # true 00:09:50.260 22:11:22 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@983 -- # rm -rf 00:09:50.260 22:11:22 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@68 -- # killprocess 78705 00:09:50.260 22:11:22 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@948 -- # '[' -z 78705 ']' 00:09:50.260 22:11:22 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@952 -- # kill -0 78705 00:09:50.260 22:11:22 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@953 -- # uname 00:09:50.260 22:11:22 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@953 -- # '[' 
Linux = Linux ']' 00:09:50.260 22:11:22 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78705 00:09:50.260 22:11:22 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:50.260 killing process with pid 78705 00:09:50.260 22:11:22 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:50.260 22:11:22 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78705' 00:09:50.260 22:11:22 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@967 -- # kill 78705 00:09:50.260 22:11:22 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@972 -- # wait 78705 00:09:50.519 22:11:22 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@69 -- # delete_tmp_conf_files 00:09:50.519 22:11:22 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@12 -- # rm -f /usr/local/etc/its.conf 00:09:50.519 22:11:22 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@70 -- # iscsitestfini 00:09:50.519 22:11:22 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:09:50.519 22:11:22 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@71 -- # exit 0 00:09:50.519 00:09:50.519 real 0m25.451s 00:09:50.519 user 0m38.774s 00:09:50.519 sys 0m5.296s 00:09:50.519 22:11:22 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:50.519 22:11:22 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x 00:09:50.519 ************************************ 00:09:50.519 END TEST iscsi_tgt_calsoft 00:09:50.519 ************************************ 00:09:50.519 22:11:22 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@31 -- # run_test iscsi_tgt_filesystem /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem/filesystem.sh 00:09:50.519 22:11:22 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:50.519 22:11:22 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:09:50.519 22:11:22 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:09:50.519 ************************************ 00:09:50.519 START TEST iscsi_tgt_filesystem 00:09:50.519 ************************************ 00:09:50.519 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem/filesystem.sh 00:09:50.519 * Looking for test storage... 00:09:50.779 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:09:50.779 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/setup/common.sh 00:09:50.779 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:09:50.779 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:50.779 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:50.779 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:50.779 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:50.779 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:50.779 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:09:50.779 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:09:50.779 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:09:50.779 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:50.779 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 
00:09:50.779 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- 
common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 
00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:09:50.780 22:11:22 
iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@76 -- # 
CONFIG_CRYPTO_MLX5=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=y 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@14 -- # 
VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:09:50.780 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:50.780 #define SPDK_CONFIG_H 00:09:50.780 #define SPDK_CONFIG_APPS 1 00:09:50.780 #define SPDK_CONFIG_ARCH native 00:09:50.780 #undef SPDK_CONFIG_ASAN 00:09:50.780 #undef SPDK_CONFIG_AVAHI 00:09:50.780 #undef SPDK_CONFIG_CET 00:09:50.780 #define SPDK_CONFIG_COVERAGE 1 00:09:50.780 #define SPDK_CONFIG_CROSS_PREFIX 00:09:50.780 #undef SPDK_CONFIG_CRYPTO 00:09:50.780 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:50.780 #undef SPDK_CONFIG_CUSTOMOCF 00:09:50.780 #undef SPDK_CONFIG_DAOS 00:09:50.780 #define SPDK_CONFIG_DAOS_DIR 00:09:50.780 #define SPDK_CONFIG_DEBUG 1 00:09:50.780 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:50.780 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:09:50.780 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:09:50.780 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:09:50.780 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:50.780 #undef SPDK_CONFIG_DPDK_UADK 00:09:50.780 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:50.780 #define SPDK_CONFIG_EXAMPLES 1 
00:09:50.780 #undef SPDK_CONFIG_FC 00:09:50.780 #define SPDK_CONFIG_FC_PATH 00:09:50.780 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:50.780 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:50.780 #undef SPDK_CONFIG_FUSE 00:09:50.780 #undef SPDK_CONFIG_FUZZER 00:09:50.780 #define SPDK_CONFIG_FUZZER_LIB 00:09:50.780 #undef SPDK_CONFIG_GOLANG 00:09:50.780 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:50.780 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:50.781 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:50.781 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:50.781 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:50.781 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:50.781 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:50.781 #define SPDK_CONFIG_IDXD 1 00:09:50.781 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:50.781 #undef SPDK_CONFIG_IPSEC_MB 00:09:50.781 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:50.781 #define SPDK_CONFIG_ISAL 1 00:09:50.781 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:50.781 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:50.781 #define SPDK_CONFIG_LIBDIR 00:09:50.781 #undef SPDK_CONFIG_LTO 00:09:50.781 #define SPDK_CONFIG_MAX_LCORES 128 00:09:50.781 #define SPDK_CONFIG_NVME_CUSE 1 00:09:50.781 #undef SPDK_CONFIG_OCF 00:09:50.781 #define SPDK_CONFIG_OCF_PATH 00:09:50.781 #define SPDK_CONFIG_OPENSSL_PATH 00:09:50.781 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:50.781 #define SPDK_CONFIG_PGO_DIR 00:09:50.781 #undef SPDK_CONFIG_PGO_USE 00:09:50.781 #define SPDK_CONFIG_PREFIX /usr/local 00:09:50.781 #undef SPDK_CONFIG_RAID5F 00:09:50.781 #undef SPDK_CONFIG_RBD 00:09:50.781 #define SPDK_CONFIG_RDMA 1 00:09:50.781 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:50.781 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:50.781 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:50.781 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:50.781 #define SPDK_CONFIG_SHARED 1 00:09:50.781 #undef SPDK_CONFIG_SMA 00:09:50.781 #define SPDK_CONFIG_TESTS 1 00:09:50.781 #undef SPDK_CONFIG_TSAN 00:09:50.781 #define SPDK_CONFIG_UBLK 1 
00:09:50.781 #define SPDK_CONFIG_UBSAN 1 00:09:50.781 #undef SPDK_CONFIG_UNIT_TESTS 00:09:50.781 #define SPDK_CONFIG_URING 1 00:09:50.781 #define SPDK_CONFIG_URING_PATH 00:09:50.781 #define SPDK_CONFIG_URING_ZNS 1 00:09:50.781 #undef SPDK_CONFIG_USDT 00:09:50.781 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:50.781 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:50.781 #undef SPDK_CONFIG_VFIO_USER 00:09:50.781 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:50.781 #define SPDK_CONFIG_VHOST 1 00:09:50.781 #define SPDK_CONFIG_VIRTIO 1 00:09:50.781 #undef SPDK_CONFIG_VTUNE 00:09:50.781 #define SPDK_CONFIG_VTUNE_DIR 00:09:50.781 #define SPDK_CONFIG_WERROR 1 00:09:50.781 #define SPDK_CONFIG_WPDK_DIR 00:09:50.781 #undef SPDK_CONFIG_XNVME 00:09:50.781 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.781 22:11:22 
iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@5 -- # export PATH 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 
00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@68 -- # uname -s 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load 
collect-vmstat) 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@58 -- # : 1 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@70 -- # : 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@74 -- # 
: 0 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@76 -- # : 1 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@86 -- # : 0 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@92 -- # : 0 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@93 -- # export 
SPDK_TEST_NVMF 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@94 -- # : 0 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:50.781 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@112 -- # : 0 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@122 -- # : 1 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@124 -- # : /home/vagrant/spdk_repo/dpdk/build 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@126 -- # : 0 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:50.782 22:11:22 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@138 -- # : main 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@140 -- # : true 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@142 -- # : 0 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@144 -- # : 1 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:50.782 
22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@154 -- # : 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@167 -- # : 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@169 -- # : 0 00:09:50.782 
22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@178 -- # export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:09:50.782 22:11:22 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@200 -- # cat 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:50.782 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@254 -- # 
export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:09:50.783 22:11:22 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@318 -- # [[ -z 79418 ]] 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@318 -- # kill -0 79418 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.q0X2hQ 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 
00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem /tmp/spdk.q0X2hQ/tests/filesystem /tmp/spdk.q0X2hQ 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@327 -- # df -T 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6264512512 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 
-- # sizes["$mount"]=6267887616 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2496167936 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10989568 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12016746496 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5957632000 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@362 -- # avails["$mount"]=12016746496 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5957632000 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267719680 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267891712 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=172032 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:09:50.783 22:11:22 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/iscsi-uring-vg-autotest/fedora38-libvirt/output 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:09:50.783 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=94794276864 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4908503040 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:09:50.784 * Looking for test storage... 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@374 -- # target_space=12016746496 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:09:50.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@389 -- # return 0 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1685 -- # true 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@25 -- # 
INITIATOR_TAG=2 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@11 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@5 -- # export PATH 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@13 -- # iscsitestinit 00:09:50.784 22:11:22 
iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@29 -- # timing_enter start_iscsi_tgt 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@32 -- # pid=79455 00:09:50.784 Process pid: 79455 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@33 -- # echo 'Process pid: 79455' 00:09:50.784 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@35 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:09:50.785 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@37 -- # waitforlisten 79455 00:09:50.785 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@829 -- # '[' -z 79455 ']' 00:09:50.785 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.785 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@31 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:09:50.785 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:50.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.785 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:50.785 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:50.785 22:11:22 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:50.785 [2024-07-23 22:11:22.905568] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:09:50.785 [2024-07-23 22:11:22.905659] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79455 ] 00:09:51.043 [2024-07-23 22:11:23.026774] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:51.043 [2024-07-23 22:11:23.040856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:51.043 [2024-07-23 22:11:23.093193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.043 [2024-07-23 22:11:23.093225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.043 [2024-07-23 22:11:23.093400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.043 [2024-07-23 22:11:23.093402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.043 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:51.043 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@862 -- # return 0 00:09:51.043 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@38 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:09:51.043 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.043 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.043 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:09:51.043 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@39 -- # rpc_cmd framework_start_init 00:09:51.043 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.043 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.043 [2024-07-23 22:11:23.219473] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.301 iscsi_tgt is listening. Running tests... 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@40 -- # echo 'iscsi_tgt is listening. Running tests...' 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@42 -- # timing_exit start_iscsi_tgt 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@44 -- # get_first_nvme_bdf 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1522 -- # bdfs=() 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1522 -- # local bdfs 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1523 -- # bdfs=($(get_nvme_bdfs)) 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1523 -- # get_nvme_bdfs 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1511 -- # bdfs=() 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1511 -- # local bdfs 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:51.301 22:11:23 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1513 -- # (( 2 == 0 )) 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1525 -- # echo 0000:00:10.0 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@44 -- # bdf=0000:00:10.0 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@45 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@46 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@47 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:00:10.0 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.301 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.559 Nvme0n1 00:09:51.559 22:11:23 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@49 -- # rpc_cmd bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@49 -- # ls_guid=668bda41-e7d0-4b52-b42d-b76f5330b087 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@50 -- # get_lvs_free_mb 668bda41-e7d0-4b52-b42d-b76f5330b087 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1362 -- # local lvs_uuid=668bda41-e7d0-4b52-b42d-b76f5330b087 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1363 -- # local lvs_info 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1364 -- # local fc 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1365 -- # local cs 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1366 -- # rpc_cmd bdev_lvol_get_lvstores 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1366 -- # lvs_info='[ 00:09:51.559 { 00:09:51.559 "uuid": "668bda41-e7d0-4b52-b42d-b76f5330b087", 00:09:51.559 "name": "lvs_0", 00:09:51.559 "base_bdev": "Nvme0n1", 00:09:51.559 "total_data_clusters": 1278, 00:09:51.559 
"free_clusters": 1278, 00:09:51.559 "block_size": 4096, 00:09:51.559 "cluster_size": 4194304 00:09:51.559 } 00:09:51.559 ]' 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="668bda41-e7d0-4b52-b42d-b76f5330b087") .free_clusters' 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1367 -- # fc=1278 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="668bda41-e7d0-4b52-b42d-b76f5330b087") .cluster_size' 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1368 -- # cs=4194304 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1371 -- # free_mb=5112 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1372 -- # echo 5112 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@50 -- # free_mb=5112 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@52 -- # '[' 5112 -gt 2048 ']' 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@53 -- # rpc_cmd bdev_lvol_create -u 668bda41-e7d0-4b52-b42d-b76f5330b087 lbd_0 2048 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.559 fe591751-4e7a-4873-9f2a-c05e9b8ef93a 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@61 -- # lvol_name=lvs_0/lbd_0 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@62 -- # rpc_cmd iscsi_create_target_node Target1 Target1_alias lvs_0/lbd_0:0 1:2 256 -d 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 
00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.559 22:11:23 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@63 -- # sleep 1 00:09:52.934 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@65 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:09:52.934 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:09:52.934 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@66 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:09:52.934 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:09:52.934 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:09:52.934 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@67 -- # waitforiscsidevices 1 00:09:52.934 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@116 -- # local num=1 00:09:52.934 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:09:52.934 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:09:52.934 [2024-07-23 22:11:24.824888] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:52.934 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:09:52.934 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:09:52.934 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # n=1 00:09:52.934 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:09:52.934 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@123 -- # return 0 00:09:52.934 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@69 -- # get_bdev_size 
lvs_0/lbd_0 00:09:52.934 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1376 -- # local bdev_name=lvs_0/lbd_0 00:09:52.934 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1377 -- # local bdev_info 00:09:52.934 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1378 -- # local bs 00:09:52.934 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1379 -- # local nb 00:09:52.934 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1380 -- # rpc_cmd bdev_get_bdevs -b lvs_0/lbd_0 00:09:52.934 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.934 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:52.934 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.934 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:09:52.934 { 00:09:52.934 "name": "fe591751-4e7a-4873-9f2a-c05e9b8ef93a", 00:09:52.934 "aliases": [ 00:09:52.934 "lvs_0/lbd_0" 00:09:52.934 ], 00:09:52.934 "product_name": "Logical Volume", 00:09:52.934 "block_size": 4096, 00:09:52.934 "num_blocks": 524288, 00:09:52.934 "uuid": "fe591751-4e7a-4873-9f2a-c05e9b8ef93a", 00:09:52.934 "assigned_rate_limits": { 00:09:52.934 "rw_ios_per_sec": 0, 00:09:52.934 "rw_mbytes_per_sec": 0, 00:09:52.935 "r_mbytes_per_sec": 0, 00:09:52.935 "w_mbytes_per_sec": 0 00:09:52.935 }, 00:09:52.935 "claimed": false, 00:09:52.935 "zoned": false, 00:09:52.935 "supported_io_types": { 00:09:52.935 "read": true, 00:09:52.935 "write": true, 00:09:52.935 "unmap": true, 00:09:52.935 "flush": false, 00:09:52.935 "reset": true, 00:09:52.935 "nvme_admin": false, 00:09:52.935 "nvme_io": false, 00:09:52.935 "nvme_io_md": false, 00:09:52.935 "write_zeroes": true, 00:09:52.935 "zcopy": false, 00:09:52.935 "get_zone_info": false, 00:09:52.935 "zone_management": false, 00:09:52.935 
"zone_append": false, 00:09:52.935 "compare": false, 00:09:52.935 "compare_and_write": false, 00:09:52.935 "abort": false, 00:09:52.935 "seek_hole": true, 00:09:52.935 "seek_data": true, 00:09:52.935 "copy": false, 00:09:52.935 "nvme_iov_md": false 00:09:52.935 }, 00:09:52.935 "driver_specific": { 00:09:52.935 "lvol": { 00:09:52.935 "lvol_store_uuid": "668bda41-e7d0-4b52-b42d-b76f5330b087", 00:09:52.935 "base_bdev": "Nvme0n1", 00:09:52.935 "thin_provision": false, 00:09:52.935 "num_allocated_clusters": 512, 00:09:52.935 "snapshot": false, 00:09:52.935 "clone": false, 00:09:52.935 "esnap_clone": false 00:09:52.935 } 00:09:52.935 } 00:09:52.935 } 00:09:52.935 ]' 00:09:52.935 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:09:52.935 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1381 -- # bs=4096 00:09:52.935 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:09:52.935 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1382 -- # nb=524288 00:09:52.935 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1385 -- # bdev_size=2048 00:09:52.935 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1386 -- # echo 2048 00:09:52.935 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@69 -- # lvol_size=2147483648 00:09:52.935 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@70 -- # trap 'iscsicleanup; remove_backends; umount /mnt/device; rm -rf /mnt/device; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:09:52.935 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@72 -- # mkdir -p /mnt/device 00:09:52.935 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # iscsiadm -m session -P 3 00:09:52.935 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # grep 'Attached scsi disk' 00:09:52.935 
22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # awk '{print $4}' 00:09:52.935 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # dev=sda 00:09:52.935 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@76 -- # waitforfile /dev/sda 00:09:52.935 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1263 -- # local i=0 00:09:52.935 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1264 -- # '[' '!' -e /dev/sda ']' 00:09:52.935 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1270 -- # '[' '!' -e /dev/sda ']' 00:09:52.935 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1274 -- # return 0 00:09:52.935 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@78 -- # sec_size_to_bytes sda 00:09:52.935 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@76 -- # local dev=sda 00:09:52.935 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@78 -- # [[ -e /sys/block/sda ]] 00:09:52.935 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@80 -- # echo 2147483648 00:09:52.935 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@78 -- # dev_size=2147483648 00:09:52.935 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@80 -- # (( lvol_size == dev_size )) 00:09:52.935 22:11:24 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@81 -- # parted -s /dev/sda mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:52.935 22:11:25 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@82 -- # sleep 1 00:09:52.935 [2024-07-23 22:11:25.018858] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:53.915 22:11:26 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@144 -- # run_test iscsi_tgt_filesystem_ext4 filesystem_test ext4 00:09:53.915 22:11:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:53.915 
22:11:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:53.915 22:11:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:53.915 ************************************ 00:09:53.915 START TEST iscsi_tgt_filesystem_ext4 00:09:53.915 ************************************ 00:09:53.915 22:11:26 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1123 -- # filesystem_test ext4 00:09:53.915 22:11:26 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@89 -- # fstype=ext4 00:09:53.915 22:11:26 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@91 -- # make_filesystem ext4 /dev/sda1 00:09:53.915 22:11:26 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:09:53.915 22:11:26 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda1 00:09:53.915 22:11:26 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:09:53.915 22:11:26 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:09:53.915 22:11:26 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:09:53.915 22:11:26 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:09:53.915 22:11:26 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda1 00:09:53.915 mke2fs 1.46.5 (30-Dec-2021) 00:09:53.915 Discarding device blocks: 0/522240 done 00:09:53.915 Creating filesystem with 522240 4k blocks and 130560 inodes 00:09:53.915 Filesystem UUID: 15b689cc-6a38-4b44-934d-abb34d46f5c3 00:09:53.915 Superblock backups stored on blocks: 00:09:53.915 32768, 98304, 
163840, 229376, 294912 00:09:53.915 00:09:53.915 Allocating group tables: 0/16 done 00:09:53.915 Writing inode tables: 0/16 done 00:09:54.173 Creating journal (8192 blocks): done 00:09:54.174 Writing superblocks and filesystem accounting information: 0/16 done 00:09:54.174 00:09:54.174 22:11:26 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:09:54.174 22:11:26 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:09:54.174 22:11:26 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@93 -- # '[' 1 -eq 1 ']' 00:09:54.174 22:11:26 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@94 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randwrite -ioengine=libaio -bs=4k -size=1024M -name=job0 00:09:54.432 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:09:54.432 fio-3.35 00:09:54.432 Starting 1 thread 00:09:54.432 job0: Laying out IO file (1 file / 1024MiB) 00:10:09.311 00:10:09.311 job0: (groupid=0, jobs=1): err= 0: pid=79604: Tue Jul 23 22:11:38 2024 00:10:09.311 write: IOPS=21.2k, BW=82.9MiB/s (87.0MB/s)(1024MiB/12348msec); 0 zone resets 00:10:09.311 slat (usec): min=4, max=31316, avg=17.45, stdev=136.82 00:10:09.311 clat (usec): min=932, max=43083, avg=2995.68, stdev=1596.88 00:10:09.311 lat (usec): min=944, max=43094, avg=3013.13, stdev=1608.20 00:10:09.311 clat percentiles (usec): 00:10:09.311 | 1.00th=[ 1762], 5.00th=[ 1975], 10.00th=[ 2057], 20.00th=[ 2180], 00:10:09.311 | 30.00th=[ 2376], 40.00th=[ 2704], 50.00th=[ 2835], 60.00th=[ 2999], 00:10:09.311 | 70.00th=[ 3359], 80.00th=[ 3556], 90.00th=[ 3785], 95.00th=[ 4228], 00:10:09.311 | 99.00th=[ 5080], 99.50th=[ 7308], 99.90th=[22938], 99.95th=[38536], 00:10:09.311 | 99.99th=[41157] 00:10:09.311 bw ( KiB/s): min=73576, 
max=93040, per=99.70%, avg=84663.00, stdev=6141.69, samples=24 00:10:09.311 iops : min=18394, max=23260, avg=21165.75, stdev=1535.42, samples=24 00:10:09.311 lat (usec) : 1000=0.01% 00:10:09.311 lat (msec) : 2=6.27%, 4=86.68%, 10=6.60%, 20=0.25%, 50=0.20% 00:10:09.311 cpu : usr=5.17%, sys=25.12%, ctx=18204, majf=0, minf=1 00:10:09.311 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:10:09.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:10:09.311 issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.311 latency : target=0, window=0, percentile=100.00%, depth=64 00:10:09.311 00:10:09.311 Run status group 0 (all jobs): 00:10:09.311 WRITE: bw=82.9MiB/s (87.0MB/s), 82.9MiB/s-82.9MiB/s (87.0MB/s-87.0MB/s), io=1024MiB (1074MB), run=12348-12348msec 00:10:09.311 00:10:09.311 Disk stats (read/write): 00:10:09.311 sda: ios=0/256272, merge=0/1751, ticks=0/672986, in_queue=672987, util=99.21% 00:10:09.311 22:11:38 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@96 -- # umount /mnt/device 00:10:09.311 22:11:38 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@98 -- # iscsiadm -m node --logout 00:10:09.311 Logging out of session [sid: 1, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:10:09.311 Logout of [sid: 1, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
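Each `TEST iscsi_tgt_filesystem_*` block in this log drives the same sequence: make the filesystem, mount it, run a fio random-write, unmount, cycle the iSCSI session, remount, verify the test file survived, then run a fio random-read. The sketch below reconstructs that flow from the xtrace above, assuming the device (`/dev/sda1`), mount point (`/mnt/device`), and portal (`10.0.0.1:3260`) shown in the log; the `run` indirection is this sketch's own addition (not in `filesystem.sh`) so the sequence can be dry-run, and the session-wait/device-discovery helpers are elided as comments.

```shell
# Sketch of the filesystem_test flow traced in this log. Line references
# (sh@NN) point at filesystem.sh line numbers visible in the xtrace.
run() { "$@"; }   # swap for `echo` to dry-run without touching the device

filesystem_test() {
    local fstype=$1 dev=/dev/sda1 mnt=/mnt/device force
    # ext4 is forced with -F, btrfs/xfs with -f (see make_filesystem trace)
    if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi

    run "mkfs.$fstype" "$force" "$dev"                         # sh@91
    run mount "$dev" "$mnt"                                    # sh@92
    run fio -filename="$mnt/test" -direct=1 -iodepth 64 -thread=1 \
        -invalidate=1 -rw=randwrite -ioengine=libaio -bs=4k \
        -size=1024M -name=job0                                 # sh@94
    run umount "$mnt"                                          # sh@96

    # log the session out and back in to prove the data survives the cycle
    run iscsiadm -m node --logout                              # sh@98
    run iscsiadm -m node --login -p 10.0.0.1:3260              # sh@100
    # (real script: waitforiscsidevices, parse the sdX name, waitforfile)

    run mount -o rw "$dev" "$mnt"                              # sh@106
    if run test -f "$mnt/test"; then echo 'File existed.'; fi  # sh@107-108
    run fio -filename="$mnt/test" -direct=1 -iodepth 64 -thread=1 \
        -invalidate=1 -rw=randread -ioengine=libaio -bs=4k \
        -runtime=20 -time_based=1 -name=job0                   # sh@109
    run rm -rf "$mnt/test"                                     # sh@116
    run umount "$mnt"                                          # sh@117
}
```

Redefining `run() { echo "$@"; }` before calling `filesystem_test ext4` prints the command sequence instead of executing it, which is handy for checking the flow against the trace.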
00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@99 -- # waitforiscsidevices 0 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@116 -- # local num=0 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:10:09.311 iscsiadm: No active sessions. 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # true 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # n=0 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@123 -- # return 0 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@100 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:10:09.311 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:10:09.311 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@101 -- # waitforiscsidevices 1 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@116 -- # local num=1 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:10:09.311 [2024-07-23 22:11:39.061567] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # n=1 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@123 -- # return 0 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@103 -- # iscsiadm -m session -P 3 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@103 -- # grep 'Attached scsi disk' 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@103 -- # awk '{print $4}' 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@103 -- # dev=sda 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@105 -- # waitforfile /dev/sda1 00:10:09.311 22:11:39 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1263 -- # local i=0 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1264 -- # '[' '!' -e /dev/sda1 ']' 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1270 -- # '[' '!' -e /dev/sda1 ']' 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1274 -- # return 0 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@106 -- # mount -o rw /dev/sda1 /mnt/device 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@107 -- # '[' -f /mnt/device/test ']' 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@108 -- # echo 'File existed.' 00:10:09.311 File existed. 00:10:09.311 22:11:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@109 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randread -ioengine=libaio -bs=4k -runtime=20 -time_based=1 -name=job0 00:10:09.311 job0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:10:09.311 fio-3.35 00:10:09.311 Starting 1 thread 00:10:27.394 00:10:27.394 job0: (groupid=0, jobs=1): err= 0: pid=79845: Tue Jul 23 22:11:59 2024 00:10:27.394 read: IOPS=21.5k, BW=83.9MiB/s (88.0MB/s)(1679MiB/20003msec) 00:10:27.394 slat (usec): min=3, max=2688, avg= 7.82, stdev=29.61 00:10:27.394 clat (usec): min=796, max=21158, avg=2968.08, stdev=821.57 00:10:27.394 lat (usec): min=803, max=22503, avg=2975.90, stdev=826.50 00:10:27.394 clat percentiles (usec): 00:10:27.394 | 1.00th=[ 1893], 5.00th=[ 2073], 10.00th=[ 2147], 20.00th=[ 2212], 00:10:27.394 | 30.00th=[ 2311], 40.00th=[ 2802], 50.00th=[ 2933], 60.00th=[ 3032], 
00:10:27.394 | 70.00th=[ 3425], 80.00th=[ 3687], 90.00th=[ 3785], 95.00th=[ 3949], 00:10:27.394 | 99.00th=[ 4686], 99.50th=[ 5080], 99.90th=[11731], 99.95th=[14091], 00:10:27.394 | 99.99th=[18220] 00:10:27.394 bw ( KiB/s): min=56256, max=95296, per=100.00%, avg=86069.26, stdev=5674.35, samples=39 00:10:27.394 iops : min=14064, max=23824, avg=21517.26, stdev=1418.60, samples=39 00:10:27.394 lat (usec) : 1000=0.02% 00:10:27.394 lat (msec) : 2=2.47%, 4=93.03%, 10=4.34%, 20=0.13%, 50=0.01% 00:10:27.394 cpu : usr=5.81%, sys=14.56%, ctx=27718, majf=0, minf=65 00:10:27.394 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:10:27.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:10:27.395 issued rwts: total=429806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.395 latency : target=0, window=0, percentile=100.00%, depth=64 00:10:27.395 00:10:27.395 Run status group 0 (all jobs): 00:10:27.395 READ: bw=83.9MiB/s (88.0MB/s), 83.9MiB/s-83.9MiB/s (88.0MB/s-88.0MB/s), io=1679MiB (1760MB), run=20003-20003msec 00:10:27.395 00:10:27.395 Disk stats (read/write): 00:10:27.395 sda: ios=426799/5, merge=1291/2, ticks=1179096/5, in_queue=1179100, util=99.66% 00:10:27.395 22:11:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@116 -- # rm -rf /mnt/device/test 00:10:27.395 22:11:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@117 -- # umount /mnt/device 00:10:27.395 00:10:27.395 real 0m33.344s 00:10:27.395 user 0m2.065s 00:10:27.395 sys 0m6.258s 00:10:27.395 22:11:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:27.395 22:11:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:27.395 ************************************ 00:10:27.395 END TEST 
iscsi_tgt_filesystem_ext4 00:10:27.395 ************************************ 00:10:27.395 22:11:59 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@145 -- # run_test iscsi_tgt_filesystem_btrfs filesystem_test btrfs 00:10:27.395 22:11:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:27.395 22:11:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:27.395 22:11:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:27.395 ************************************ 00:10:27.395 START TEST iscsi_tgt_filesystem_btrfs 00:10:27.395 ************************************ 00:10:27.395 22:11:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1123 -- # filesystem_test btrfs 00:10:27.395 22:11:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@89 -- # fstype=btrfs 00:10:27.395 22:11:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@91 -- # make_filesystem btrfs /dev/sda1 00:10:27.395 22:11:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:10:27.395 22:11:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda1 00:10:27.395 22:11:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:10:27.395 22:11:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:10:27.395 22:11:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:10:27.395 22:11:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:10:27.395 22:11:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- 
common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/sda1 00:10:27.395 btrfs-progs v6.6.2 00:10:27.395 See https://btrfs.readthedocs.io for more information. 00:10:27.395 00:10:27.395 Performing full device TRIM /dev/sda1 (1.99GiB) ... 00:10:27.395 NOTE: several default settings have changed in version 5.15, please make sure 00:10:27.395 this does not affect your deployments: 00:10:27.395 - DUP for metadata (-m dup) 00:10:27.395 - enabled no-holes (-O no-holes) 00:10:27.395 - enabled free-space-tree (-R free-space-tree) 00:10:27.395 00:10:27.395 Label: (null) 00:10:27.395 UUID: 1fe4e67b-adfe-42d4-8dbe-c87bceffa76a 00:10:27.395 Node size: 16384 00:10:27.395 Sector size: 4096 00:10:27.395 Filesystem size: 1.99GiB 00:10:27.395 Block group profiles: 00:10:27.395 Data: single 8.00MiB 00:10:27.395 Metadata: DUP 102.00MiB 00:10:27.395 System: DUP 8.00MiB 00:10:27.395 SSD detected: yes 00:10:27.395 Zoned device: no 00:10:27.395 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:27.395 Runtime features: free-space-tree 00:10:27.395 Checksum: crc32c 00:10:27.395 Number of devices: 1 00:10:27.395 Devices: 00:10:27.395 ID SIZE PATH 00:10:27.395 1 1.99GiB /dev/sda1 00:10:27.395 00:10:27.395 22:11:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:10:27.395 22:11:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:10:27.395 22:11:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@93 -- # '[' 1 -eq 1 ']' 00:10:27.395 22:11:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@94 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randwrite -ioengine=libaio -bs=4k -size=1024M -name=job0 00:10:27.653 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 
00:10:27.653 fio-3.35 00:10:27.653 Starting 1 thread 00:10:27.653 job0: Laying out IO file (1 file / 1024MiB) 00:10:45.729 00:10:45.729 job0: (groupid=0, jobs=1): err= 0: pid=80107: Tue Jul 23 22:12:15 2024 00:10:45.729 write: IOPS=16.7k, BW=65.4MiB/s (68.6MB/s)(1024MiB/15657msec); 0 zone resets 00:10:45.729 slat (usec): min=10, max=4279, avg=51.25, stdev=106.07 00:10:45.729 clat (usec): min=261, max=13441, avg=3768.11, stdev=1471.16 00:10:45.729 lat (usec): min=306, max=13459, avg=3819.36, stdev=1486.82 00:10:45.729 clat percentiles (usec): 00:10:45.729 | 1.00th=[ 1745], 5.00th=[ 2089], 10.00th=[ 2245], 20.00th=[ 2638], 00:10:45.729 | 30.00th=[ 2900], 40.00th=[ 3097], 50.00th=[ 3359], 60.00th=[ 3654], 00:10:45.729 | 70.00th=[ 4047], 80.00th=[ 4948], 90.00th=[ 5932], 95.00th=[ 6783], 00:10:45.729 | 99.00th=[ 8291], 99.50th=[ 8848], 99.90th=[10159], 99.95th=[10683], 00:10:45.729 | 99.99th=[11994] 00:10:45.729 bw ( KiB/s): min=50120, max=77856, per=100.00%, avg=66981.35, stdev=8743.59, samples=31 00:10:45.729 iops : min=12530, max=19466, avg=16745.32, stdev=2185.98, samples=31 00:10:45.729 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:45.729 lat (msec) : 2=3.23%, 4=65.99%, 10=30.66%, 20=0.11% 00:10:45.729 cpu : usr=5.81%, sys=41.29%, ctx=64843, majf=0, minf=1 00:10:45.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:10:45.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:10:45.729 issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.729 latency : target=0, window=0, percentile=100.00%, depth=64 00:10:45.729 00:10:45.729 Run status group 0 (all jobs): 00:10:45.729 WRITE: bw=65.4MiB/s (68.6MB/s), 65.4MiB/s-65.4MiB/s (68.6MB/s-68.6MB/s), io=1024MiB (1074MB), run=15657-15657msec 00:10:45.729 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@96 -- # umount 
/mnt/device 00:10:45.729 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@98 -- # iscsiadm -m node --logout 00:10:45.730 Logging out of session [sid: 2, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:10:45.730 Logout of [sid: 2, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@99 -- # waitforiscsidevices 0 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@116 -- # local num=0 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:10:45.730 iscsiadm: No active sessions. 
00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # true 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # n=0 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@123 -- # return 0 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@100 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:10:45.730 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:10:45.730 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@101 -- # waitforiscsidevices 1 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@116 -- # local num=1 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:10:45.730 [2024-07-23 22:12:15.756537] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # n=1 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- 
iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@123 -- # return 0 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@103 -- # iscsiadm -m session -P 3 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@103 -- # grep 'Attached scsi disk' 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@103 -- # awk '{print $4}' 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@103 -- # dev=sda 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@105 -- # waitforfile /dev/sda1 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1263 -- # local i=0 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1264 -- # '[' '!' -e /dev/sda1 ']' 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1270 -- # '[' '!' -e /dev/sda1 ']' 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1274 -- # return 0 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@106 -- # mount -o rw /dev/sda1 /mnt/device 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@107 -- # '[' -f /mnt/device/test ']' 00:10:45.730 File existed. 00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@108 -- # echo 'File existed.' 
00:10:45.730 22:12:15 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@109 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randread -ioengine=libaio -bs=4k -runtime=20 -time_based=1 -name=job0 00:10:45.730 job0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:10:45.730 fio-3.35 00:10:45.730 Starting 1 thread 00:11:07.656 00:11:07.656 job0: (groupid=0, jobs=1): err= 0: pid=80334: Tue Jul 23 22:12:36 2024 00:11:07.656 read: IOPS=21.1k, BW=82.5MiB/s (86.5MB/s)(1650MiB/20003msec) 00:11:07.656 slat (usec): min=5, max=2032, avg= 9.80, stdev=13.76 00:11:07.656 clat (usec): min=996, max=22732, avg=3017.36, stdev=743.26 00:11:07.656 lat (usec): min=1031, max=23606, avg=3027.16, stdev=746.35 00:11:07.656 clat percentiles (usec): 00:11:07.656 | 1.00th=[ 2073], 5.00th=[ 2147], 10.00th=[ 2180], 20.00th=[ 2278], 00:11:07.656 | 30.00th=[ 2343], 40.00th=[ 2900], 50.00th=[ 2999], 60.00th=[ 3064], 00:11:07.656 | 70.00th=[ 3589], 80.00th=[ 3720], 90.00th=[ 3818], 95.00th=[ 3916], 00:11:07.656 | 99.00th=[ 4621], 99.50th=[ 4752], 99.90th=[ 6849], 99.95th=[ 9241], 00:11:07.656 | 99.99th=[18220] 00:11:07.656 bw ( KiB/s): min=69440, max=92844, per=100.00%, avg=84569.10, stdev=2939.52, samples=39 00:11:07.656 iops : min=17360, max=23211, avg=21142.31, stdev=734.87, samples=39 00:11:07.656 lat (usec) : 1000=0.01% 00:11:07.656 lat (msec) : 2=0.16%, 4=95.67%, 10=4.13%, 20=0.04%, 50=0.01% 00:11:07.656 cpu : usr=5.41%, sys=20.20%, ctx=46771, majf=0, minf=65 00:11:07.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:11:07.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:07.656 issued rwts: total=422507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.656 latency : target=0, window=0, percentile=100.00%, depth=64 
00:11:07.656 00:11:07.656 Run status group 0 (all jobs): 00:11:07.656 READ: bw=82.5MiB/s (86.5MB/s), 82.5MiB/s-82.5MiB/s (86.5MB/s-86.5MB/s), io=1650MiB (1731MB), run=20003-20003msec 00:11:07.656 22:12:36 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@116 -- # rm -rf /mnt/device/test 00:11:07.657 22:12:36 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@117 -- # umount /mnt/device 00:11:07.657 ************************************ 00:11:07.657 END TEST iscsi_tgt_filesystem_btrfs 00:11:07.657 ************************************ 00:11:07.657 00:11:07.657 real 0m36.700s 00:11:07.657 user 0m2.250s 00:11:07.657 sys 0m10.889s 00:11:07.657 22:12:36 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:07.657 22:12:36 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:07.657 22:12:36 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@146 -- # run_test iscsi_tgt_filesystem_xfs filesystem_test xfs 00:11:07.657 22:12:36 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:07.657 22:12:36 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:07.657 22:12:36 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:07.657 ************************************ 00:11:07.657 START TEST iscsi_tgt_filesystem_xfs 00:11:07.657 ************************************ 00:11:07.657 22:12:36 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1123 -- # filesystem_test xfs 00:11:07.657 22:12:36 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@89 -- # fstype=xfs 00:11:07.657 22:12:36 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@91 -- # make_filesystem xfs /dev/sda1 00:11:07.657 22:12:36 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:11:07.657 22:12:36 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda1 00:11:07.657 22:12:36 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:11:07.657 22:12:36 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:11:07.657 22:12:36 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:11:07.657 22:12:36 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:11:07.657 22:12:36 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/sda1 00:11:07.657 meta-data=/dev/sda1 isize=512 agcount=4, agsize=130560 blks 00:11:07.657 = sectsz=4096 attr=2, projid32bit=1 00:11:07.657 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:07.657 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:07.657 data = bsize=4096 blocks=522240, imaxpct=25 00:11:07.657 = sunit=0 swidth=0 blks 00:11:07.657 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:07.657 log =internal log bsize=4096 blocks=16384, version=2 00:11:07.657 = sectsz=4096 sunit=1 blks, lazy-count=1 00:11:07.657 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:07.657 Discarding blocks...Done. 
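The `make_filesystem` xtrace above (autotest_common.sh@924-943) shows the same branch for all three runs: ext4 takes `-F` to force, while btrfs and xfs take `-f`. A hedged reconstruction of that helper is below; to keep it safe to dry-run, this sketch only prints the mkfs command it would run, whereas the real helper executes mkfs (and declares a retry counter `i` that never fires in this trace).

```shell
# Reconstruction of make_filesystem's force-flag selection, as seen in the
# xtrace (line references are autotest_common.sh line numbers from the log).
make_filesystem() {
    local fstype=$1      # @924: local fstype=...
    local dev_name=$2    # @925: local dev_name=...
    local force

    if [ "$fstype" = ext4 ]; then
        force=-F         # @930: mkfs.ext4 forces with -F
    else
        force=-f         # @932: mkfs.btrfs / mkfs.xfs force with -f
    fi

    # Sketch only: print the command. The real helper runs
    # "mkfs.$fstype" "$force" "$dev_name" and returns its status (@935/@943).
    echo "mkfs.$fstype $force $dev_name"
}
```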
00:11:07.657 22:12:36 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:11:07.657 22:12:36 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:11:07.657 22:12:37 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@93 -- # '[' 1 -eq 1 ']' 00:11:07.657 22:12:37 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@94 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randwrite -ioengine=libaio -bs=4k -size=1024M -name=job0 00:11:07.657 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:11:07.657 fio-3.35 00:11:07.657 Starting 1 thread 00:11:07.657 job0: Laying out IO file (1 file / 1024MiB) 00:11:17.633 00:11:17.633 job0: (groupid=0, jobs=1): err= 0: pid=80592: Tue Jul 23 22:12:49 2024 00:11:17.633 write: IOPS=21.2k, BW=82.9MiB/s (87.0MB/s)(1024MiB/12345msec); 0 zone resets 00:11:17.633 slat (usec): min=3, max=4519, avg=16.26, stdev=84.32 00:11:17.633 clat (usec): min=1101, max=9066, avg=2996.69, stdev=723.88 00:11:17.633 lat (usec): min=1105, max=9639, avg=3012.96, stdev=728.27 00:11:17.633 clat percentiles (usec): 00:11:17.633 | 1.00th=[ 1827], 5.00th=[ 2114], 10.00th=[ 2147], 20.00th=[ 2245], 00:11:17.633 | 30.00th=[ 2442], 40.00th=[ 2835], 50.00th=[ 2933], 60.00th=[ 3032], 00:11:17.633 | 70.00th=[ 3425], 80.00th=[ 3654], 90.00th=[ 3851], 95.00th=[ 4228], 00:11:17.633 | 99.00th=[ 4883], 99.50th=[ 5407], 99.90th=[ 6587], 99.95th=[ 7570], 00:11:17.633 | 99.99th=[ 8848] 00:11:17.633 bw ( KiB/s): min=71488, max=87584, per=99.85%, avg=84808.67, stdev=3013.76, samples=24 00:11:17.633 iops : min=17872, max=21896, avg=21202.17, stdev=753.44, samples=24 00:11:17.633 lat (msec) : 2=2.29%, 4=90.18%, 10=7.53% 00:11:17.633 cpu : usr=5.07%, sys=13.65%, ctx=17358, majf=0, minf=1 00:11:17.633 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:11:17.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:17.633 issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.633 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:17.633 00:11:17.633 Run status group 0 (all jobs): 00:11:17.633 WRITE: bw=82.9MiB/s (87.0MB/s), 82.9MiB/s-82.9MiB/s (87.0MB/s-87.0MB/s), io=1024MiB (1074MB), run=12345-12345msec 00:11:17.633 00:11:17.633 Disk stats (read/write): 00:11:17.633 sda: ios=0/256646, merge=0/496, ticks=0/671889, in_queue=671888, util=99.23% 00:11:17.633 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@96 -- # umount /mnt/device 00:11:17.892 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@98 -- # iscsiadm -m node --logout 00:11:17.892 Logging out of session [sid: 3, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:11:17.892 Logout of [sid: 3, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
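After each logout/login, the log calls `waitforiscsidevices` (iscsi_tgt/common.sh@116-123): it polls `iscsiadm -m session -P 3` and counts lines matching `Attached scsi disk sd[a-z]*` until the count equals the expected number of devices. The sketch below reconstructs that loop from the trace; the 20-try bound comes from the `(( i <= 20 ))` condition in the xtrace, while the 1-second sleep between tries is an assumption (no delay is visible in the log).

```shell
# Poll the iSCSI session list until `num` SCSI disks are attached,
# mirroring the waitforiscsidevices loop traced in this log.
waitforiscsidevices() {
    local num=$1 i n
    for ((i = 1; i <= 20; i++)); do          # common.sh@118
        # common.sh@119: count attached disks; `|| true` keeps a zero count
        # (grep exit 1, "No active sessions.") from aborting the loop
        n=$(iscsiadm -m session -P 3 2>/dev/null |
            grep -c 'Attached scsi disk sd[a-z]*' || true)
        if [ "$n" -eq "$num" ]; then         # common.sh@120
            return 0                         # common.sh@123
        fi
        sleep 1                              # assumed pacing between retries
    done
    return 1
}
```

With `num=0` this succeeds immediately when `iscsiadm` reports "No active sessions." (grep counts zero matches), which is exactly the path the trace takes after each logout.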
00:11:17.892 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@99 -- # waitforiscsidevices 0 00:11:17.892 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@116 -- # local num=0 00:11:17.892 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:11:17.892 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:11:17.892 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:11:17.892 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:11:17.892 iscsiadm: No active sessions. 00:11:17.892 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # true 00:11:17.892 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # n=0 00:11:17.892 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:11:17.892 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@123 -- # return 0 00:11:17.892 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@100 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:11:17.892 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:11:17.892 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:11:17.892 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@101 -- # waitforiscsidevices 1 00:11:17.892 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@116 -- # local num=1 00:11:17.892 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:11:17.892 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:11:17.892 [2024-07-23 22:12:49.966429] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:17.892 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:11:17.892 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:11:17.892 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # n=1 00:11:17.892 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:11:17.892 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@123 -- # return 0 00:11:17.892 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@103 -- # iscsiadm -m session -P 3 00:11:17.892 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@103 -- # grep 'Attached scsi disk' 00:11:17.892 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@103 -- # awk '{print $4}' 00:11:17.892 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@103 -- # dev=sda 00:11:17.892 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@105 -- # waitforfile /dev/sda1 00:11:17.892 22:12:49 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1263 -- # local i=0 00:11:17.892 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1264 -- # '[' '!' -e /dev/sda1 ']' 00:11:17.893 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1270 -- # '[' '!' -e /dev/sda1 ']' 00:11:17.893 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1274 -- # return 0 00:11:17.893 22:12:49 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@106 -- # mount -o rw /dev/sda1 /mnt/device 00:11:17.893 22:12:50 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@107 -- # '[' -f /mnt/device/test ']' 00:11:17.893 22:12:50 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@108 -- # echo 'File existed.' 00:11:17.893 File existed. 00:11:17.893 22:12:50 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@109 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randread -ioengine=libaio -bs=4k -runtime=20 -time_based=1 -name=job0 00:11:18.150 job0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:11:18.150 fio-3.35 00:11:18.150 Starting 1 thread 00:11:40.084 00:11:40.084 job0: (groupid=0, jobs=1): err= 0: pid=80794: Tue Jul 23 22:13:10 2024 00:11:40.084 read: IOPS=23.0k, BW=90.0MiB/s (94.4MB/s)(1800MiB/20003msec) 00:11:40.084 slat (usec): min=3, max=555, avg= 6.15, stdev= 7.57 00:11:40.084 clat (usec): min=982, max=9219, avg=2771.16, stdev=650.48 00:11:40.084 lat (usec): min=1074, max=9224, avg=2777.31, stdev=650.24 00:11:40.084 clat percentiles (usec): 00:11:40.084 | 1.00th=[ 1762], 5.00th=[ 1876], 10.00th=[ 1958], 20.00th=[ 2114], 00:11:40.084 | 30.00th=[ 2245], 40.00th=[ 2540], 50.00th=[ 2737], 60.00th=[ 2933], 
00:11:40.084 | 70.00th=[ 3163], 80.00th=[ 3392], 90.00th=[ 3687], 95.00th=[ 3785], 00:11:40.084 | 99.00th=[ 4424], 99.50th=[ 4490], 99.90th=[ 4752], 99.95th=[ 5145], 00:11:40.084 | 99.99th=[ 6783] 00:11:40.084 bw ( KiB/s): min=85576, max=104560, per=100.00%, avg=92425.54, stdev=5626.75, samples=39 00:11:40.084 iops : min=21394, max=26140, avg=23106.49, stdev=1406.78, samples=39 00:11:40.084 lat (usec) : 1000=0.01% 00:11:40.084 lat (msec) : 2=12.98%, 4=84.63%, 10=2.39% 00:11:40.084 cpu : usr=5.66%, sys=13.10%, ctx=30068, majf=0, minf=65 00:11:40.084 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:11:40.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:40.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:40.084 issued rwts: total=460816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:40.084 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:40.084 00:11:40.084 Run status group 0 (all jobs): 00:11:40.084 READ: bw=90.0MiB/s (94.4MB/s), 90.0MiB/s-90.0MiB/s (94.4MB/s-94.4MB/s), io=1800MiB (1888MB), run=20003-20003msec 00:11:40.084 00:11:40.084 Disk stats (read/write): 00:11:40.084 sda: ios=457137/0, merge=1310/0, ticks=1178270/0, in_queue=1178270, util=99.63% 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@116 -- # rm -rf /mnt/device/test 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@117 -- # umount /mnt/device 00:11:40.084 00:11:40.084 real 0m34.170s 00:11:40.084 user 0m2.035s 00:11:40.084 sys 0m4.560s 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:40.084 ************************************ 00:11:40.084 END TEST iscsi_tgt_filesystem_xfs 
00:11:40.084 ************************************ 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@148 -- # rm -rf /mnt/device 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@152 -- # iscsicleanup 00:11:40.084 Cleaning up iSCSI connection 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:11:40.084 Logging out of session [sid: 4, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:11:40.084 Logout of [sid: 4, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@983 -- # rm -rf 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@153 -- # remove_backends 00:11:40.084 INFO: Removing lvol bdev 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@17 -- # echo 'INFO: Removing lvol bdev' 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@18 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:40.084 [2024-07-23 22:13:10.467851] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (fe591751-4e7a-4873-9f2a-c05e9b8ef93a) received event(SPDK_BDEV_EVENT_REMOVE) 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.084 22:13:10 
iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@20 -- # echo 'INFO: Removing lvol stores' 00:11:40.084 INFO: Removing lvol stores 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@21 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.084 INFO: Removing NVMe 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@23 -- # echo 'INFO: Removing NVMe' 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@24 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@26 -- # return 0 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@154 -- # killprocess 79455 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@948 -- # '[' -z 79455 ']' 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@952 -- # kill -0 79455 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@953 -- # uname 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79455 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:40.084 killing process with pid 79455 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79455' 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@967 -- # kill 79455 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@972 -- # wait 79455 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@155 -- # iscsitestfini 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:11:40.084 00:11:40.084 real 1m48.241s 00:11:40.084 user 6m54.144s 00:11:40.084 sys 0m35.577s 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:40.084 22:13:10 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:40.084 ************************************ 00:11:40.084 END TEST iscsi_tgt_filesystem 00:11:40.084 ************************************ 00:11:40.084 22:13:10 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@32 -- # run_test chap_during_discovery /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_discovery.sh 00:11:40.085 22:13:10 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:40.085 22:13:10 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:40.085 22:13:10 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:11:40.085 ************************************ 00:11:40.085 START TEST chap_during_discovery 00:11:40.085 ************************************ 00:11:40.085 22:13:10 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_discovery.sh 00:11:40.085 * Looking for test storage... 
00:11:40.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 
00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_common.sh 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@7 -- # TARGET_NAME=iqn.2016-06.io.spdk:disk1 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@8 -- # TARGET_ALIAS_NAME=disk1_alias 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@9 -- # MALLOC_BDEV_SIZE=64 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@10 -- # MALLOC_BLOCK_SIZE=512 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@13 -- # USER=chapo 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@14 -- # MUSER=mchapo 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@15 -- # PASS=123456789123 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@16 -- # MPASS=321978654321 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@19 -- # iscsitestinit 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@21 -- # set_up_iscsi_target 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@140 -- # timing_enter start_iscsi_tgt 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 
00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@142 -- # pid=81091 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@143 -- # echo 'iSCSI target launched. pid: 81091' 00:11:40.085 iSCSI target launched. pid: 81091 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@141 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@144 -- # trap 'killprocess $pid;exit 1' SIGINT SIGTERM EXIT 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@145 -- # waitforlisten 81091 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@829 -- # '[' -z 81091 ']' 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:40.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:40.085 22:13:11 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:40.085 [2024-07-23 22:13:11.073788] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:11:40.085 [2024-07-23 22:13:11.073876] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81091 ] 00:11:40.085 [2024-07-23 22:13:11.314023] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:40.085 [2024-07-23 22:13:11.330067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.085 [2024-07-23 22:13:11.357210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@862 -- # return 0 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@146 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@147 -- # rpc_cmd framework_start_init 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:40.085 [2024-07-23 22:13:12.044710] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.085 iscsi_tgt is listening. Running tests... 
00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@148 -- # echo 'iscsi_tgt is listening. Running tests...' 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@149 -- # timing_exit start_iscsi_tgt 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@151 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@152 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@153 -- # rpc_cmd bdev_malloc_create 64 512 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:40.085 Malloc0 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@154 -- # rpc_cmd iscsi_create_target_node iqn.2016-06.io.spdk:disk1 disk1_alias Malloc0:0 1:2 256 -d 00:11:40.085 
22:13:12 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.085 22:13:12 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@155 -- # sleep 1 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@156 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:11:41.461 configuring target for bidirectional authentication 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@24 -- # echo 'configuring target for bidirectional authentication' 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@25 -- # config_chap_credentials_for_target -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@84 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@13 -- # OPTIND=0 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@20 -- # CHAP_MPASS= 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@21 -- # 
AUTH_GROUP_ID=1 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # 
getopts :t:u:s:r:m:dlb opt 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 1 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 1 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.461 22:13:13 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@95 -- # '[' 0 -eq 1 ']' 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@103 -- # '[' 1 -eq 1 ']' 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@104 -- # rpc_cmd iscsi_set_discovery_auth -r -m -g 1 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.462 executing discovery without adding credential to initiator - we expect failure 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@26 -- # echo 'executing discovery without adding credential to initiator - we expect failure' 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@27 -- # rc=0 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@28 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:11:41.462 iscsiadm: Login failed to authenticate with target 00:11:41.462 iscsiadm: discovery login to 10.0.0.1 rejected: initiator failed authorization 00:11:41.462 iscsiadm: Could not perform SendTargets discovery: iSCSI login failed due to authorization failure 00:11:41.462 configuring initiator for bidirectional authentication 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@28 -- # rc=24 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@29 -- # '[' 24 -eq 0 ']' 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@35 -- # echo 'configuring initiator for bidirectional authentication' 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@36 -- # config_chap_credentials_for_initiator -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@113 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@13 -- # OPTIND=0 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:11:41.462 22:13:13 
iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@20 -- # CHAP_MPASS= 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- 
chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@114 -- # default_initiator_chap_credentials 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:11:41.462 iscsiadm: No matching sessions found 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # true 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:11:41.462 iscsiadm: No records found 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # true 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 
00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@78 -- # restart_iscsid 00:11:41.462 22:13:13 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:11:44.750 22:13:16 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:11:44.750 22:13:16 
iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- # sleep 1 00:11:45.318 22:13:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:45.318 22:13:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@116 -- # '[' 0 -eq 1 ']' 00:11:45.318 22:13:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@126 -- # '[' 1 -eq 1 ']' 00:11:45.318 22:13:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@127 -- # sed -i 's/#discovery.sendtargets.auth.authmethod = CHAP/discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:11:45.318 22:13:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@128 -- # sed -i 's/#discovery.sendtargets.auth.username =.*/discovery.sendtargets.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:11:45.318 22:13:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@129 -- # sed -i 's/#discovery.sendtargets.auth.password =.*/discovery.sendtargets.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:11:45.318 22:13:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' 1 -eq 1 ']' 00:11:45.318 22:13:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' -n 321978654321 ']' 00:11:45.318 22:13:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' -n mchapo ']' 00:11:45.318 22:13:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@131 -- # sed -i 's/#discovery.sendtargets.auth.username_in =.*/discovery.sendtargets.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:11:45.318 22:13:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@132 -- # sed -i 's/#discovery.sendtargets.auth.password_in =.*/discovery.sendtargets.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:11:45.318 22:13:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@135 -- # restart_iscsid 00:11:45.318 22:13:17 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:11:48.607 22:13:20 
iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:11:48.607 22:13:20 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- # sleep 1 00:11:49.543 22:13:21 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@136 -- # trap 'trap - ERR; default_initiator_chap_credentials; print_backtrace >&2' ERR 00:11:49.543 executing discovery with adding credential to initiator 00:11:49.543 22:13:21 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@37 -- # echo 'executing discovery with adding credential to initiator' 00:11:49.543 22:13:21 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@38 -- # rc=0 00:11:49.543 22:13:21 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@39 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:11:49.543 10.0.0.1:3260,1 iqn.2016-06.io.spdk:disk1 00:11:49.543 DONE 00:11:49.543 22:13:21 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@40 -- # '[' 0 -ne 0 ']' 00:11:49.543 22:13:21 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@44 -- # echo DONE 00:11:49.543 22:13:21 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@45 -- # default_initiator_chap_credentials 00:11:49.543 22:13:21 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:11:49.543 iscsiadm: No matching sessions found 00:11:49.543 22:13:21 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # true 00:11:49.543 22:13:21 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:11:49.543 22:13:21 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:11:49.543 22:13:21 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:11:49.543 22:13:21 
iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:11:49.543 22:13:21 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:11:49.543 22:13:21 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:11:49.544 22:13:21 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:11:49.544 22:13:21 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:11:49.544 22:13:21 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:11:49.544 22:13:21 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:11:49.544 22:13:21 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:11:49.544 22:13:21 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@78 -- # restart_iscsid 00:11:49.544 22:13:21 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:11:52.833 22:13:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:11:52.833 22:13:24 iscsi_tgt.chap_during_discovery -- 
chap/chap_common.sh@60 -- # sleep 1 00:11:53.769 22:13:25 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:53.769 22:13:25 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@47 -- # trap - SIGINT SIGTERM EXIT 00:11:53.769 22:13:25 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@49 -- # killprocess 81091 00:11:53.769 22:13:25 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@948 -- # '[' -z 81091 ']' 00:11:53.769 22:13:25 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@952 -- # kill -0 81091 00:11:53.769 22:13:25 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@953 -- # uname 00:11:53.769 22:13:25 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:53.769 22:13:25 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81091 00:11:53.769 killing process with pid 81091 00:11:53.769 22:13:25 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:53.769 22:13:25 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:53.769 22:13:25 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81091' 00:11:53.769 22:13:25 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@967 -- # kill 81091 00:11:53.769 22:13:25 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@972 -- # wait 81091 00:11:54.028 22:13:26 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@51 -- # iscsitestfini 00:11:54.028 22:13:26 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:11:54.028 00:11:54.028 real 0m15.148s 00:11:54.028 user 0m15.234s 00:11:54.028 sys 0m0.649s 00:11:54.028 22:13:26 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:54.028 22:13:26 
iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.028 ************************************ 00:11:54.028 END TEST chap_during_discovery 00:11:54.028 ************************************ 00:11:54.028 22:13:26 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@33 -- # run_test chap_mutual_auth /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_mutual_not_set.sh 00:11:54.028 22:13:26 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:54.028 22:13:26 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:54.028 22:13:26 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:11:54.028 ************************************ 00:11:54.028 START TEST chap_mutual_auth 00:11:54.028 ************************************ 00:11:54.028 22:13:26 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_mutual_not_set.sh 00:11:54.028 * Looking for test storage... 00:11:54.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap 00:11:54.028 22:13:26 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:11:54.028 22:13:26 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:11:54.028 22:13:26 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:11:54.028 22:13:26 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:11:54.028 22:13:26 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:11:54.028 22:13:26 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:11:54.028 22:13:26 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:11:54.028 22:13:26 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
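chap_mutual_not_set.sh sources iscsi_tgt/common.sh, which defines `TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE")` and later prepends that array to the target binary's command line (`ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}")`). The bash array-as-command-prefix idiom can be sketched as below; since `ip netns exec` needs root and a prepared namespace, `env` stands in for it here (a hypothetical substitute; bash is required, as POSIX sh has no arrays):

```shell
# Bash idiom behind TARGET_NS_CMD: an array expanded as "${arr[@]}" prefixes
# any command while preserving word boundaries. 'env GREETING=hello' stands
# in for 'ip netns exec spdk_iscsi_ns', which would need root privileges.
NS_CMD=(env GREETING=hello)
out=$("${NS_CMD[@]}" sh -c 'echo "$GREETING world"')
echo "$out"
```

Keeping the prefix in an array rather than a string means arguments with spaces survive intact, which is why the harness composes ISCSI_APP this way instead of concatenating strings.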
00:11:54.028 22:13:26 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:11:54.028 22:13:26 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:11:54.028 22:13:26 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:11:54.028 22:13:26 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:11:54.028 22:13:26 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:11:54.028 22:13:26 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:11:54.028 22:13:26 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:11:54.028 22:13:26 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:11:54.028 22:13:26 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:11:54.028 22:13:26 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:11:54.028 22:13:26 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:11:54.028 22:13:26 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:11:54.029 22:13:26 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_common.sh 00:11:54.029 22:13:26 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@7 -- # TARGET_NAME=iqn.2016-06.io.spdk:disk1 00:11:54.029 22:13:26 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@8 -- # TARGET_ALIAS_NAME=disk1_alias 00:11:54.029 22:13:26 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@9 -- # MALLOC_BDEV_SIZE=64 00:11:54.029 22:13:26 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@10 -- # MALLOC_BLOCK_SIZE=512 00:11:54.029 22:13:26 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@13 -- # USER=chapo 00:11:54.029 22:13:26 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@14 -- # 
MUSER=mchapo 00:11:54.029 22:13:26 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@15 -- # PASS=123456789123 00:11:54.029 22:13:26 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@16 -- # MPASS=321978654321 00:11:54.029 22:13:26 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@19 -- # iscsitestinit 00:11:54.029 22:13:26 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:11:54.029 22:13:26 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@21 -- # set_up_iscsi_target 00:11:54.029 22:13:26 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@140 -- # timing_enter start_iscsi_tgt 00:11:54.029 22:13:26 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:54.029 22:13:26 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:54.288 iSCSI target launched. pid: 81365 00:11:54.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.288 22:13:26 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@142 -- # pid=81365 00:11:54.288 22:13:26 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@143 -- # echo 'iSCSI target launched. 
pid: 81365' 00:11:54.288 22:13:26 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@141 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:11:54.288 22:13:26 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@144 -- # trap 'killprocess $pid;exit 1' SIGINT SIGTERM EXIT 00:11:54.288 22:13:26 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@145 -- # waitforlisten 81365 00:11:54.288 22:13:26 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@829 -- # '[' -z 81365 ']' 00:11:54.288 22:13:26 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.288 22:13:26 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:54.288 22:13:26 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.288 22:13:26 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:54.288 22:13:26 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:54.288 [2024-07-23 22:13:26.281379] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:11:54.288 [2024-07-23 22:13:26.281757] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81365 ] 00:11:54.546 [2024-07-23 22:13:26.523848] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
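The target above is launched with `-m 0x2`, an SPDK core mask; the EAL notices that follow ("Total cores available: 1", "Reactor started on core 1") are consistent with bit 1 being the only bit set. A sketch decoding such a mask (the variable names are illustrative, not SPDK's own):

```shell
# Decode an SPDK-style hex core mask: bit i set => core i is used.
# 0x2 = binary 10 => core 1 only, matching the reactor notice in the log.
mask=$((0x2))
cores=""
i=0
while [ "$mask" -ne 0 ]; do
  if [ $((mask & 1)) -eq 1 ]; then
    cores="${cores}${cores:+ }$i"   # append core index, space-separated
  fi
  mask=$((mask >> 1))
  i=$((i + 1))
done
echo "cores: $cores"
```

The other launch flags follow the same single-core shape: `-p 1` pins the main lcore to core 1 and `-s 512` caps the hugepage memory at 512 MB, both visible in the DPDK EAL parameter line logged above.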
00:11:54.546 [2024-07-23 22:13:26.541156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.546 [2024-07-23 22:13:26.568811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.114 22:13:27 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:55.114 22:13:27 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@862 -- # return 0 00:11:55.114 22:13:27 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@146 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:11:55.114 22:13:27 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.114 22:13:27 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:55.114 22:13:27 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.114 22:13:27 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@147 -- # rpc_cmd framework_start_init 00:11:55.114 22:13:27 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.114 22:13:27 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:55.114 [2024-07-23 22:13:27.264409] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:55.373 iscsi_tgt is listening. Running tests... 00:11:55.373 22:13:27 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.374 22:13:27 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@148 -- # echo 'iscsi_tgt is listening. Running tests...' 
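`waitforlisten` (the autotest_common.sh@833-838 region above, with `rpc_addr=/var/tmp/spdk.sock` and `max_retries=100`) blocks until the freshly launched target creates its RPC socket. A sketch of that poll-with-retries shape; the function name and scratch socket path are invented, and only the timeout branch is exercised here since no real target is running:

```shell
# Poll for a UNIX-domain socket with bounded retries, in the spirit of the
# harness's waitforlisten. Returns 0 once the socket exists, 1 on timeout.
wait_for_sock() {
  sock=$1
  max_retries=$2
  i=0
  while [ "$i" -lt "$max_retries" ]; do
    [ -S "$sock" ] && return 0   # -S: path exists and is a socket
    i=$((i + 1))
    sleep 0.1
  done
  return 1
}
# No target is running, so this demonstrates the timeout branch only.
wait_for_sock /tmp/no-such-spdk.sock 3 && listening=yes || listening=no
echo "$listening"
```

Once the real socket appears, the harness proceeds to the `rpc_cmd iscsi_set_options` / `framework_start_init` calls traced next, which must happen before `--wait-for-rpc` startup completes.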
00:11:55.374 22:13:27 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@149 -- # timing_exit start_iscsi_tgt 00:11:55.374 22:13:27 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:55.374 22:13:27 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:55.374 22:13:27 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@151 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:11:55.374 22:13:27 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.374 22:13:27 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:55.374 22:13:27 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.374 22:13:27 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@152 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:11:55.374 22:13:27 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.374 22:13:27 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:55.374 22:13:27 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.374 22:13:27 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@153 -- # rpc_cmd bdev_malloc_create 64 512 00:11:55.374 22:13:27 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.374 22:13:27 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:55.374 Malloc0 00:11:55.374 22:13:27 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.374 22:13:27 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@154 -- # rpc_cmd iscsi_create_target_node iqn.2016-06.io.spdk:disk1 disk1_alias Malloc0:0 1:2 256 -d 00:11:55.374 22:13:27 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.374 22:13:27 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:55.374 22:13:27 
iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.374 22:13:27 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@155 -- # sleep 1 00:11:56.311 configuring target for authentication 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@156 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@24 -- # echo 'configuring target for authentication' 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@25 -- # config_chap_credentials_for_target -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@84 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 
00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- # DURING_LOGIN=1 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@86 -- # 
rpc_cmd iscsi_create_auth_group 1 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 1 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@95 -- # '[' 1 -eq 1 ']' 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@96 -- # '[' 0 -eq 1 ']' 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@99 -- # rpc_cmd iscsi_target_node_set_auth -g 1 -r iqn.2016-06.io.spdk:disk1 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@103 -- # '[' 0 -eq 1 ']' 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@106 -- # rpc_cmd iscsi_set_discovery_auth -r -g 1 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:11:56.311 executing discovery without adding credential to initiator - we expect failure 00:11:56.311 configuring initiator with biderectional authentication 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@26 -- # echo 'executing discovery without adding credential to initiator - we expect failure' 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@28 -- # echo 'configuring initiator with biderectional authentication' 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@29 -- # config_chap_credentials_for_initiator -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@113 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- 
chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:56.311 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:11:56.571 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:56.571 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:56.571 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:11:56.571 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:56.571 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:56.571 22:13:28 
iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- # DURING_LOGIN=1 00:11:56.571 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:56.571 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:11:56.571 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:11:56.571 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:11:56.571 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@114 -- # default_initiator_chap_credentials 00:11:56.571 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:11:56.571 iscsiadm: No matching sessions found 00:11:56.571 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # true 00:11:56.571 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:11:56.571 iscsiadm: No records found 00:11:56.571 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # true 00:11:56.571 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:11:56.571 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:11:56.571 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:11:56.571 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:11:56.571 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' 
/etc/iscsi/iscsid.conf 00:11:56.571 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:11:56.571 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:11:56.571 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:11:56.571 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:11:56.571 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:11:56.571 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@78 -- # restart_iscsid 00:11:56.571 22:13:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:11:59.861 22:13:31 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:11:59.861 22:13:31 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:12:00.796 22:13:32 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:00.796 22:13:32 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@116 -- # '[' 1 -eq 1 ']' 00:12:00.796 22:13:32 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@117 -- # sed -i 's/#node.session.auth.authmethod = CHAP/node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:12:00.796 22:13:32 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@118 -- # sed -i 's/#node.session.auth.username =.*/node.session.auth.username 
= chapo/' /etc/iscsi/iscsid.conf 00:12:00.796 22:13:32 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@119 -- # sed -i 's/#node.session.auth.password =.*/node.session.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:12:00.796 22:13:32 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' 1 -eq 1 ']' 00:12:00.796 22:13:32 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' -n 321978654321 ']' 00:12:00.796 22:13:32 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' -n mchapo ']' 00:12:00.796 22:13:32 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@121 -- # sed -i 's/#node.session.auth.username_in =.*/node.session.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:12:00.796 22:13:32 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@122 -- # sed -i 's/#node.session.auth.password_in =.*/node.session.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:12:00.796 22:13:32 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@126 -- # '[' 1 -eq 1 ']' 00:12:00.796 22:13:32 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@127 -- # sed -i 's/#discovery.sendtargets.auth.authmethod = CHAP/discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:12:00.796 22:13:32 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@128 -- # sed -i 's/#discovery.sendtargets.auth.username =.*/discovery.sendtargets.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:12:00.796 22:13:32 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@129 -- # sed -i 's/#discovery.sendtargets.auth.password =.*/discovery.sendtargets.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:12:00.796 22:13:32 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@130 -- # '[' 1 -eq 1 ']' 00:12:00.796 22:13:32 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@130 -- # '[' -n 321978654321 ']' 00:12:00.796 22:13:32 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@130 -- # '[' -n mchapo ']' 00:12:00.796 22:13:32 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@131 
-- # sed -i 's/#discovery.sendtargets.auth.username_in =.*/discovery.sendtargets.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:12:00.796 22:13:32 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@132 -- # sed -i 's/#discovery.sendtargets.auth.password_in =.*/discovery.sendtargets.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:12:00.796 22:13:32 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@135 -- # restart_iscsid 00:12:00.796 22:13:32 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:12:04.083 22:13:35 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:12:04.083 22:13:35 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:12:04.649 executing discovery - target should not be discovered since the -m option was not used 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@136 -- # trap 'trap - ERR; default_initiator_chap_credentials; print_backtrace >&2' ERR 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@30 -- # echo 'executing discovery - target should not be discovered since the -m option was not used' 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@31 -- # rc=0 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@32 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:12:04.649 [2024-07-23 22:13:36.809389] iscsi.c: 982:iscsi_auth_params: *ERROR*: Initiator wants to use mutual CHAP for security, but it's not enabled. 
00:12:04.649 [2024-07-23 22:13:36.809429] iscsi.c:1957:iscsi_op_login_rsp_handle_csg_bit: *ERROR*: iscsi_auth_params() failed 00:12:04.649 iscsiadm: Login failed to authenticate with target 00:12:04.649 iscsiadm: discovery login to 10.0.0.1 rejected: initiator failed authorization 00:12:04.649 iscsiadm: Could not perform SendTargets discovery: iSCSI login failed due to authorization failure 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@32 -- # rc=24 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@33 -- # '[' 24 -eq 0 ']' 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@37 -- # echo 'configuring target for authentication with the -m option' 00:12:04.649 configuring target for authentication with the -m option 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@38 -- # config_chap_credentials_for_target -t 2 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@84 -- # parse_cmd_line -t 2 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- 
# AUTH_GROUP_ID=1 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=2 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- 
chap/chap_common.sh@24 -- # case ${opt} in 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- # DURING_LOGIN=1 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 2 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 2 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@95 -- # '[' 1 -eq 1 ']' 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@96 -- # '[' 1 -eq 1 ']' 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@97 -- # rpc_cmd iscsi_target_node_set_auth -g 2 -r -m iqn.2016-06.io.spdk:disk1 00:12:04.649 22:13:36 
iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.649 22:13:36 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:12:04.909 22:13:36 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.909 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:12:04.909 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@103 -- # '[' 1 -eq 1 ']' 00:12:04.909 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@104 -- # rpc_cmd iscsi_set_discovery_auth -r -m -g 2 00:12:04.909 22:13:36 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.909 22:13:36 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:12:04.909 executing discovery: 00:12:04.909 22:13:36 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.909 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@39 -- # echo 'executing discovery:' 00:12:04.909 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@40 -- # rc=0 00:12:04.909 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@41 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:12:04.909 10.0.0.1:3260,1 iqn.2016-06.io.spdk:disk1 00:12:04.909 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@42 -- # '[' 0 -ne 0 ']' 00:12:04.909 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@46 -- # echo 'executing login:' 00:12:04.909 executing login: 00:12:04.909 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@47 -- # rc=0 00:12:04.909 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@48 -- # iscsiadm -m node -l -p 10.0.0.1:3260 00:12:04.909 Logging in to [iface: default, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 00:12:04.909 Login to [iface: default, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 
successful. 00:12:04.909 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@49 -- # '[' 0 -ne 0 ']' 00:12:04.909 DONE 00:12:04.909 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@54 -- # echo DONE 00:12:04.909 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@55 -- # default_initiator_chap_credentials 00:12:04.909 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:12:04.909 [2024-07-23 22:13:36.924598] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:04.909 Logging out of session [sid: 5, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 00:12:04.909 Logout of [sid: 5, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] successful. 00:12:04.909 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:12:04.909 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:12:04.909 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:12:04.909 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:12:04.909 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:12:04.909 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:12:04.909 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = 
CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:12:04.909 22:13:36 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:12:04.909 22:13:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:12:04.909 22:13:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:12:04.909 22:13:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:12:04.909 22:13:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@78 -- # restart_iscsid 00:12:04.909 22:13:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:12:08.200 22:13:40 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:12:08.200 22:13:40 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:12:09.135 22:13:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:09.135 22:13:41 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@57 -- # trap - SIGINT SIGTERM EXIT 00:12:09.135 22:13:41 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@59 -- # killprocess 81365 00:12:09.135 22:13:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@948 -- # '[' -z 81365 ']' 00:12:09.135 22:13:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@952 -- # kill -0 81365 00:12:09.135 22:13:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@953 -- # uname 00:12:09.135 22:13:41 iscsi_tgt.chap_mutual_auth -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:09.135 22:13:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81365 00:12:09.136 22:13:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:09.136 22:13:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:09.136 killing process with pid 81365 00:12:09.136 22:13:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81365' 00:12:09.136 22:13:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@967 -- # kill 81365 00:12:09.136 22:13:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@972 -- # wait 81365 00:12:09.394 22:13:41 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@61 -- # iscsitestfini 00:12:09.395 22:13:41 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:12:09.395 00:12:09.395 real 0m15.333s 00:12:09.395 user 0m15.450s 00:12:09.395 sys 0m0.691s 00:12:09.395 22:13:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:09.395 22:13:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:12:09.395 ************************************ 00:12:09.395 END TEST chap_mutual_auth 00:12:09.395 ************************************ 00:12:09.395 22:13:41 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@34 -- # run_test iscsi_tgt_reset /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset/reset.sh 00:12:09.395 22:13:41 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:09.395 22:13:41 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:09.395 22:13:41 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:12:09.395 ************************************ 00:12:09.395 START TEST iscsi_tgt_reset 00:12:09.395 ************************************ 00:12:09.395 22:13:41 iscsi_tgt.iscsi_tgt_reset -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset/reset.sh 00:12:09.653 * Looking for test storage... 00:12:09.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@26 
-- # INITIATOR_NAME=ANY 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@11 -- # iscsitestinit 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@16 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@18 -- # hash sg_reset 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@22 -- # timing_enter start_iscsi_tgt 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@25 -- # pid=81660 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@26 -- # echo 'Process pid: 81660' 00:12:09.654 Process pid: 81660 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@28 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@30 -- # waitforlisten 81660 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@829 -- # '[' -z 81660 ']' 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:12:09.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:09.654 22:13:41 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@24 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:12:09.654 [2024-07-23 22:13:41.681939] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:12:09.654 [2024-07-23 22:13:41.682030] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81660 ] 00:12:09.654 [2024-07-23 22:13:41.809679] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:09.654 [2024-07-23 22:13:41.832093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.912 [2024-07-23 22:13:41.884216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.478 22:13:42 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:10.478 22:13:42 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@862 -- # return 0 00:12:10.478 22:13:42 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@31 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:12:10.478 22:13:42 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.478 22:13:42 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:10.478 22:13:42 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.478 22:13:42 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@32 -- # rpc_cmd framework_start_init 00:12:10.478 22:13:42 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.478 22:13:42 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:10.736 [2024-07-23 22:13:42.691220] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:10.736 22:13:42 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.736 iscsi_tgt is listening. Running tests... 00:12:10.736 22:13:42 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@33 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:12:10.736 22:13:42 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@35 -- # timing_exit start_iscsi_tgt 00:12:10.736 22:13:42 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:10.736 22:13:42 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:10.736 22:13:42 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@37 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:12:10.736 22:13:42 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.736 22:13:42 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:10.736 22:13:42 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.737 22:13:42 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@38 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:12:10.737 22:13:42 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.737 22:13:42 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:10.737 22:13:42 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.737 22:13:42 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@39 -- # rpc_cmd bdev_malloc_create 64 512 00:12:10.737 22:13:42 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.737 22:13:42 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:10.737 Malloc0 00:12:10.737 22:13:42 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.737 22:13:42 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@44 -- # rpc_cmd iscsi_create_target_node Target3 Target3_alias Malloc0:0 1:2 64 -d 00:12:10.737 22:13:42 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.737 22:13:42 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:10.737 22:13:42 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:12:10.737 22:13:42 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@45 -- # sleep 1 00:12:12.111 22:13:43 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@47 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:12:12.111 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:12:12.111 22:13:43 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@48 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:12:12.111 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:12:12.111 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:12:12.111 22:13:43 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@49 -- # waitforiscsidevices 1 00:12:12.111 22:13:43 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@116 -- # local num=1 00:12:12.111 22:13:43 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:12:12.111 22:13:43 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:12:12.111 22:13:43 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:12:12.111 22:13:43 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:12:12.111 [2024-07-23 22:13:43.981396] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:12.111 22:13:43 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # n=1 00:12:12.111 22:13:43 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:12:12.111 22:13:43 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@123 -- # return 0 00:12:12.111 22:13:43 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # iscsiadm -m session -P 3 00:12:12.111 22:13:43 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # grep 'Attached scsi disk' 00:12:12.111 22:13:43 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # awk '{print $4}' 00:12:12.111 FIO pid: 81722 00:12:12.111 22:13:43 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # dev=sda 00:12:12.111 
22:13:43 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@54 -- # fiopid=81722 00:12:12.111 22:13:43 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@55 -- # echo 'FIO pid: 81722' 00:12:12.111 22:13:43 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 60 00:12:12.111 22:13:43 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@57 -- # trap 'iscsicleanup; killprocess $pid; killprocess $fiopid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:12:12.111 22:13:43 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 00:12:12.111 22:13:43 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:12:12.111 [global] 00:12:12.111 thread=1 00:12:12.111 invalidate=1 00:12:12.111 rw=read 00:12:12.111 time_based=1 00:12:12.112 runtime=60 00:12:12.112 ioengine=libaio 00:12:12.112 direct=1 00:12:12.112 bs=512 00:12:12.112 iodepth=1 00:12:12.112 norandommap=1 00:12:12.112 numjobs=1 00:12:12.112 00:12:12.112 [job0] 00:12:12.112 filename=/dev/sda 00:12:12.112 queue_depth set to 113 (sda) 00:12:12.112 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:12:12.112 fio-3.35 00:12:12.112 Starting 1 thread 00:12:13.048 22:13:44 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 81660 00:12:13.048 22:13:44 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 81722 00:12:13.048 22:13:44 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:12:13.048 [2024-07-23 22:13:45.004405] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:12:13.048 [2024-07-23 22:13:45.004494] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:12:13.048 22:13:45 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:12:13.048 [2024-07-23 22:13:45.005572] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:13.986 22:13:46 iscsi_tgt.iscsi_tgt_reset -- 
reset/reset.sh@66 -- # kill -s 0 81660 00:12:13.986 22:13:46 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 81722 00:12:13.986 22:13:46 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 00:12:13.986 22:13:46 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:12:14.925 22:13:47 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 81660 00:12:14.925 22:13:47 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 81722 00:12:14.925 22:13:47 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:12:14.925 [2024-07-23 22:13:47.019245] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:12:14.925 [2024-07-23 22:13:47.019309] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:12:14.925 22:13:47 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:12:14.925 [2024-07-23 22:13:47.020315] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:15.863 22:13:48 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 81660 00:12:15.863 22:13:48 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 81722 00:12:15.863 22:13:48 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 00:12:15.863 22:13:48 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:12:17.269 22:13:49 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 81660 00:12:17.269 22:13:49 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 81722 00:12:17.269 22:13:49 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:12:17.269 [2024-07-23 22:13:49.033971] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:12:17.269 [2024-07-23 22:13:49.034035] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:12:17.269 [2024-07-23 22:13:49.034893] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.269 
22:13:49 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:12:18.207 22:13:50 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 81660 00:12:18.207 22:13:50 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 81722 00:12:18.207 22:13:50 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@70 -- # kill 81722 00:12:18.207 22:13:50 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@71 -- # wait 81722 00:12:18.207 22:13:50 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@71 -- # true 00:12:18.207 22:13:50 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@73 -- # trap - SIGINT SIGTERM EXIT 00:12:18.207 22:13:50 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@75 -- # iscsicleanup 00:12:18.207 22:13:50 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:12:18.207 Cleaning up iSCSI connection 00:12:18.207 22:13:50 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:12:18.207 fio: io_u error on file /dev/sda: No such device: read offset=60210688, buflen=512 00:12:18.207 fio: pid=81748, err=19/file:io_u.c:1889, func=io_u error, error=No such device 00:12:18.207 Logging out of session [sid: 6, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:12:18.207 Logout of [sid: 6, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:12:18.207 22:13:50 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:12:18.207 00:12:18.207 job0: (groupid=0, jobs=1): err=19 (file:io_u.c:1889, func=io_u error, error=No such device): pid=81748: Tue Jul 23 22:13:50 2024 00:12:18.207 read: IOPS=20.4k, BW=9.96MiB/s (10.4MB/s)(57.4MiB/5762msec) 00:12:18.207 slat (usec): min=3, max=1127, avg= 5.28, stdev= 5.66 00:12:18.207 clat (nsec): min=1256, max=1835.1k, avg=43302.83, stdev=13477.48 00:12:18.207 lat (usec): min=39, max=1841, avg=48.57, stdev=14.35 00:12:18.207 clat percentiles (usec): 00:12:18.207 | 1.00th=[ 38], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 42], 00:12:18.207 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 42], 00:12:18.207 | 70.00th=[ 42], 80.00th=[ 43], 90.00th=[ 50], 95.00th=[ 52], 00:12:18.207 | 99.00th=[ 62], 99.50th=[ 70], 99.90th=[ 96], 99.95th=[ 139], 00:12:18.207 | 99.99th=[ 701] 00:12:18.207 bw ( KiB/s): min= 9106, max=10568, per=100.00%, avg=10207.55, stdev=407.34, samples=11 00:12:18.207 iops : min=18212, max=21136, avg=20415.09, stdev=814.68, samples=11 00:12:18.207 lat (usec) : 2=0.01%, 4=0.01%, 20=0.01%, 50=90.86%, 100=9.04% 00:12:18.207 lat (usec) : 250=0.06%, 500=0.02%, 750=0.01%, 1000=0.01% 00:12:18.207 lat (msec) : 2=0.01% 00:12:18.207 cpu : usr=4.55%, sys=16.18%, ctx=118794, majf=0, minf=2 00:12:18.207 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:18.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.207 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.207 issued rwts: total=117600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:18.207 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:18.207 00:12:18.207 Run status group 0 (all jobs): 00:12:18.207 READ: bw=9.96MiB/s (10.4MB/s), 9.96MiB/s-9.96MiB/s (10.4MB/s-10.4MB/s), io=57.4MiB (60.2MB), run=5762-5762msec 00:12:18.207 00:12:18.207 Disk stats (read/write): 00:12:18.207 
sda: ios=115242/0, merge=0/0, ticks=4829/0, in_queue=4829, util=98.36% 00:12:18.207 22:13:50 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@983 -- # rm -rf 00:12:18.207 22:13:50 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@76 -- # killprocess 81660 00:12:18.207 22:13:50 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@948 -- # '[' -z 81660 ']' 00:12:18.207 22:13:50 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@952 -- # kill -0 81660 00:12:18.207 22:13:50 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@953 -- # uname 00:12:18.208 22:13:50 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:18.208 22:13:50 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81660 00:12:18.208 22:13:50 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:18.208 22:13:50 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:18.208 22:13:50 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81660' 00:12:18.208 killing process with pid 81660 00:12:18.208 22:13:50 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@967 -- # kill 81660 00:12:18.208 22:13:50 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@972 -- # wait 81660 00:12:18.467 22:13:50 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@77 -- # iscsitestfini 00:12:18.467 22:13:50 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:12:18.467 ************************************ 00:12:18.467 END TEST iscsi_tgt_reset 00:12:18.467 ************************************ 00:12:18.467 00:12:18.467 real 0m8.964s 00:12:18.467 user 0m6.678s 00:12:18.467 sys 0m2.132s 00:12:18.467 22:13:50 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:18.467 22:13:50 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:12:18.467 22:13:50 
iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@35 -- # run_test iscsi_tgt_rpc_config /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.sh 00:12:18.467 22:13:50 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:18.467 22:13:50 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:18.467 22:13:50 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:12:18.467 ************************************ 00:12:18.467 START TEST iscsi_tgt_rpc_config 00:12:18.467 ************************************ 00:12:18.467 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.sh 00:12:18.467 * Looking for test storage... 00:12:18.467 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@17 -- # 
TARGET_BRIDGE2=tgt_br2 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@11 -- # iscsitestinit 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@16 -- # rpc_config_py=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.py 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@18 -- # timing_enter start_iscsi_tgt 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- 
common/autotest_common.sh@10 -- # set +x 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@21 -- # pid=81895 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@22 -- # echo 'Process pid: 81895' 00:12:18.468 Process pid: 81895 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@20 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@24 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 00:12:18.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@26 -- # waitforlisten 81895 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@829 -- # '[' -z 81895 ']' 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:18.468 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:12:18.727 [2024-07-23 22:13:50.692793] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:12:18.727 [2024-07-23 22:13:50.692914] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81895 ] 00:12:18.727 [2024-07-23 22:13:50.820934] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:18.727 [2024-07-23 22:13:50.836739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.727 [2024-07-23 22:13:50.878761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.727 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:18.727 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@862 -- # return 0 00:12:18.727 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@28 -- # rpc_wait_pid=81904 00:12:18.727 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:12:18.727 22:13:50 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 16 00:12:19.296 22:13:51 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@32 -- # ps 81904 00:12:19.296 PID TTY STAT TIME COMMAND 00:12:19.296 81904 ? 
S 0:00 python3 /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:12:19.296 22:13:51 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:19.296 [2024-07-23 22:13:51.463499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:19.555 22:13:51 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@35 -- # sleep 1 00:12:20.494 22:13:52 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@36 -- # echo 'iscsi_tgt is listening. Running tests...' 00:12:20.494 iscsi_tgt is listening. Running tests... 00:12:20.494 22:13:52 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@39 -- # NOT ps 81904 00:12:20.494 22:13:52 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@648 -- # local es=0 00:12:20.494 22:13:52 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@650 -- # valid_exec_arg ps 81904 00:12:20.494 22:13:52 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@636 -- # local arg=ps 00:12:20.494 22:13:52 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:20.494 22:13:52 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # type -t ps 00:12:20.494 22:13:52 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:20.494 22:13:52 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # type -P ps 00:12:20.494 22:13:52 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:20.494 22:13:52 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # arg=/usr/bin/ps 00:12:20.494 22:13:52 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/ps ]] 00:12:20.494 22:13:52 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@651 -- # ps 81904 00:12:20.494 PID TTY STAT TIME COMMAND 
00:12:20.494 22:13:52 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@651 -- # es=1 00:12:20.494 22:13:52 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:20.494 22:13:52 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:20.494 22:13:52 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:20.494 22:13:52 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@43 -- # rpc_wait_pid=81928 00:12:20.494 22:13:52 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@44 -- # sleep 1 00:12:20.494 22:13:52 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:12:21.871 22:13:53 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@45 -- # NOT ps 81928 00:12:21.871 22:13:53 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@648 -- # local es=0 00:12:21.871 22:13:53 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@650 -- # valid_exec_arg ps 81928 00:12:21.871 22:13:53 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@636 -- # local arg=ps 00:12:21.871 22:13:53 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:21.871 22:13:53 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # type -t ps 00:12:21.871 22:13:53 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:21.871 22:13:53 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # type -P ps 00:12:21.871 22:13:53 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:21.871 22:13:53 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # arg=/usr/bin/ps 00:12:21.872 22:13:53 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/ps ]] 00:12:21.872 22:13:53 
iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@651 -- # ps 81928 00:12:21.872 PID TTY STAT TIME COMMAND 00:12:21.872 22:13:53 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@651 -- # es=1 00:12:21.872 22:13:53 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:21.872 22:13:53 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:21.872 22:13:53 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:21.872 22:13:53 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@47 -- # timing_exit start_iscsi_tgt 00:12:21.872 22:13:53 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:21.872 22:13:53 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:12:21.872 22:13:53 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@49 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.py /home/vagrant/spdk_repo/spdk/scripts/rpc.py 10.0.0.1 10.0.0.2 3260 10.0.0.2/32 spdk_iscsi_ns 00:12:43.814 [2024-07-23 22:14:13.482551] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:44.072 [2024-07-23 22:14:16.081964] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:45.448 verify_log_flag_rpc_methods passed 00:12:45.448 create_malloc_bdevs_rpc_methods passed 00:12:45.448 verify_portal_groups_rpc_methods passed 00:12:45.448 verify_initiator_groups_rpc_method passed. 00:12:45.448 This issue will be fixed later. 00:12:45.448 verify_target_nodes_rpc_methods passed. 
00:12:45.448 verify_scsi_devices_rpc_methods passed 00:12:45.448 verify_iscsi_connection_rpc_methods passed 00:12:45.448 22:14:17 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:12:45.708 [ 00:12:45.708 { 00:12:45.708 "name": "Malloc0", 00:12:45.708 "aliases": [ 00:12:45.708 "09d594ca-dbf7-4ccc-abff-7a2d61816c58" 00:12:45.708 ], 00:12:45.708 "product_name": "Malloc disk", 00:12:45.708 "block_size": 512, 00:12:45.708 "num_blocks": 131072, 00:12:45.708 "uuid": "09d594ca-dbf7-4ccc-abff-7a2d61816c58", 00:12:45.708 "assigned_rate_limits": { 00:12:45.708 "rw_ios_per_sec": 0, 00:12:45.708 "rw_mbytes_per_sec": 0, 00:12:45.708 "r_mbytes_per_sec": 0, 00:12:45.708 "w_mbytes_per_sec": 0 00:12:45.708 }, 00:12:45.708 "claimed": false, 00:12:45.708 "zoned": false, 00:12:45.708 "supported_io_types": { 00:12:45.708 "read": true, 00:12:45.708 "write": true, 00:12:45.708 "unmap": true, 00:12:45.708 "flush": true, 00:12:45.708 "reset": true, 00:12:45.708 "nvme_admin": false, 00:12:45.708 "nvme_io": false, 00:12:45.708 "nvme_io_md": false, 00:12:45.708 "write_zeroes": true, 00:12:45.708 "zcopy": true, 00:12:45.708 "get_zone_info": false, 00:12:45.708 "zone_management": false, 00:12:45.708 "zone_append": false, 00:12:45.708 "compare": false, 00:12:45.708 "compare_and_write": false, 00:12:45.708 "abort": true, 00:12:45.708 "seek_hole": false, 00:12:45.708 "seek_data": false, 00:12:45.708 "copy": true, 00:12:45.708 "nvme_iov_md": false 00:12:45.708 }, 00:12:45.708 "memory_domains": [ 00:12:45.708 { 00:12:45.708 "dma_device_id": "system", 00:12:45.708 "dma_device_type": 1 00:12:45.708 }, 00:12:45.708 { 00:12:45.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.708 "dma_device_type": 2 00:12:45.708 } 00:12:45.708 ], 00:12:45.708 "driver_specific": {} 00:12:45.708 }, 00:12:45.708 { 00:12:45.708 "name": "Malloc1", 00:12:45.708 "aliases": [ 00:12:45.708 "2225db5b-b493-4a0e-9aa2-532eb41c471a" 00:12:45.708 ], 
00:12:45.708 "product_name": "Malloc disk", 00:12:45.708 "block_size": 512, 00:12:45.708 "num_blocks": 131072, 00:12:45.708 "uuid": "2225db5b-b493-4a0e-9aa2-532eb41c471a", 00:12:45.708 "assigned_rate_limits": { 00:12:45.708 "rw_ios_per_sec": 0, 00:12:45.708 "rw_mbytes_per_sec": 0, 00:12:45.708 "r_mbytes_per_sec": 0, 00:12:45.708 "w_mbytes_per_sec": 0 00:12:45.708 }, 00:12:45.708 "claimed": false, 00:12:45.708 "zoned": false, 00:12:45.708 "supported_io_types": { 00:12:45.708 "read": true, 00:12:45.708 "write": true, 00:12:45.708 "unmap": true, 00:12:45.708 "flush": true, 00:12:45.708 "reset": true, 00:12:45.708 "nvme_admin": false, 00:12:45.708 "nvme_io": false, 00:12:45.708 "nvme_io_md": false, 00:12:45.708 "write_zeroes": true, 00:12:45.708 "zcopy": true, 00:12:45.708 "get_zone_info": false, 00:12:45.708 "zone_management": false, 00:12:45.708 "zone_append": false, 00:12:45.708 "compare": false, 00:12:45.708 "compare_and_write": false, 00:12:45.708 "abort": true, 00:12:45.708 "seek_hole": false, 00:12:45.708 "seek_data": false, 00:12:45.708 "copy": true, 00:12:45.708 "nvme_iov_md": false 00:12:45.708 }, 00:12:45.708 "memory_domains": [ 00:12:45.708 { 00:12:45.708 "dma_device_id": "system", 00:12:45.708 "dma_device_type": 1 00:12:45.708 }, 00:12:45.708 { 00:12:45.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.708 "dma_device_type": 2 00:12:45.708 } 00:12:45.708 ], 00:12:45.708 "driver_specific": {} 00:12:45.708 }, 00:12:45.708 { 00:12:45.708 "name": "Malloc2", 00:12:45.708 "aliases": [ 00:12:45.708 "abfa6b1b-84ea-449c-b29c-dd0387260fc3" 00:12:45.708 ], 00:12:45.708 "product_name": "Malloc disk", 00:12:45.708 "block_size": 512, 00:12:45.708 "num_blocks": 131072, 00:12:45.708 "uuid": "abfa6b1b-84ea-449c-b29c-dd0387260fc3", 00:12:45.708 "assigned_rate_limits": { 00:12:45.708 "rw_ios_per_sec": 0, 00:12:45.708 "rw_mbytes_per_sec": 0, 00:12:45.708 "r_mbytes_per_sec": 0, 00:12:45.708 "w_mbytes_per_sec": 0 00:12:45.708 }, 00:12:45.708 "claimed": false, 00:12:45.708 
"zoned": false, 00:12:45.708 "supported_io_types": { 00:12:45.708 "read": true, 00:12:45.708 "write": true, 00:12:45.708 "unmap": true, 00:12:45.708 "flush": true, 00:12:45.708 "reset": true, 00:12:45.708 "nvme_admin": false, 00:12:45.708 "nvme_io": false, 00:12:45.708 "nvme_io_md": false, 00:12:45.708 "write_zeroes": true, 00:12:45.708 "zcopy": true, 00:12:45.708 "get_zone_info": false, 00:12:45.708 "zone_management": false, 00:12:45.708 "zone_append": false, 00:12:45.708 "compare": false, 00:12:45.708 "compare_and_write": false, 00:12:45.708 "abort": true, 00:12:45.708 "seek_hole": false, 00:12:45.708 "seek_data": false, 00:12:45.708 "copy": true, 00:12:45.708 "nvme_iov_md": false 00:12:45.708 }, 00:12:45.708 "memory_domains": [ 00:12:45.708 { 00:12:45.708 "dma_device_id": "system", 00:12:45.708 "dma_device_type": 1 00:12:45.708 }, 00:12:45.708 { 00:12:45.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.708 "dma_device_type": 2 00:12:45.708 } 00:12:45.708 ], 00:12:45.708 "driver_specific": {} 00:12:45.708 }, 00:12:45.708 { 00:12:45.708 "name": "Malloc3", 00:12:45.708 "aliases": [ 00:12:45.708 "64b6a0e6-d117-440f-913b-b37e1806ff6b" 00:12:45.708 ], 00:12:45.708 "product_name": "Malloc disk", 00:12:45.708 "block_size": 512, 00:12:45.708 "num_blocks": 131072, 00:12:45.708 "uuid": "64b6a0e6-d117-440f-913b-b37e1806ff6b", 00:12:45.708 "assigned_rate_limits": { 00:12:45.708 "rw_ios_per_sec": 0, 00:12:45.708 "rw_mbytes_per_sec": 0, 00:12:45.708 "r_mbytes_per_sec": 0, 00:12:45.708 "w_mbytes_per_sec": 0 00:12:45.708 }, 00:12:45.708 "claimed": false, 00:12:45.708 "zoned": false, 00:12:45.708 "supported_io_types": { 00:12:45.708 "read": true, 00:12:45.709 "write": true, 00:12:45.709 "unmap": true, 00:12:45.709 "flush": true, 00:12:45.709 "reset": true, 00:12:45.709 "nvme_admin": false, 00:12:45.709 "nvme_io": false, 00:12:45.709 "nvme_io_md": false, 00:12:45.709 "write_zeroes": true, 00:12:45.709 "zcopy": true, 00:12:45.709 "get_zone_info": false, 00:12:45.709 
"zone_management": false, 00:12:45.709 "zone_append": false, 00:12:45.709 "compare": false, 00:12:45.709 "compare_and_write": false, 00:12:45.709 "abort": true, 00:12:45.709 "seek_hole": false, 00:12:45.709 "seek_data": false, 00:12:45.709 "copy": true, 00:12:45.709 "nvme_iov_md": false 00:12:45.709 }, 00:12:45.709 "memory_domains": [ 00:12:45.709 { 00:12:45.709 "dma_device_id": "system", 00:12:45.709 "dma_device_type": 1 00:12:45.709 }, 00:12:45.709 { 00:12:45.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.709 "dma_device_type": 2 00:12:45.709 } 00:12:45.709 ], 00:12:45.709 "driver_specific": {} 00:12:45.709 }, 00:12:45.709 { 00:12:45.709 "name": "Malloc4", 00:12:45.709 "aliases": [ 00:12:45.709 "1412ee11-b4f7-40aa-85a8-30889cdfecb5" 00:12:45.709 ], 00:12:45.709 "product_name": "Malloc disk", 00:12:45.709 "block_size": 512, 00:12:45.709 "num_blocks": 131072, 00:12:45.709 "uuid": "1412ee11-b4f7-40aa-85a8-30889cdfecb5", 00:12:45.709 "assigned_rate_limits": { 00:12:45.709 "rw_ios_per_sec": 0, 00:12:45.709 "rw_mbytes_per_sec": 0, 00:12:45.709 "r_mbytes_per_sec": 0, 00:12:45.709 "w_mbytes_per_sec": 0 00:12:45.709 }, 00:12:45.709 "claimed": false, 00:12:45.709 "zoned": false, 00:12:45.709 "supported_io_types": { 00:12:45.709 "read": true, 00:12:45.709 "write": true, 00:12:45.709 "unmap": true, 00:12:45.709 "flush": true, 00:12:45.709 "reset": true, 00:12:45.709 "nvme_admin": false, 00:12:45.709 "nvme_io": false, 00:12:45.709 "nvme_io_md": false, 00:12:45.709 "write_zeroes": true, 00:12:45.709 "zcopy": true, 00:12:45.709 "get_zone_info": false, 00:12:45.709 "zone_management": false, 00:12:45.709 "zone_append": false, 00:12:45.709 "compare": false, 00:12:45.709 "compare_and_write": false, 00:12:45.709 "abort": true, 00:12:45.709 "seek_hole": false, 00:12:45.709 "seek_data": false, 00:12:45.709 "copy": true, 00:12:45.709 "nvme_iov_md": false 00:12:45.709 }, 00:12:45.709 "memory_domains": [ 00:12:45.709 { 00:12:45.709 "dma_device_id": "system", 00:12:45.709 
"dma_device_type": 1 00:12:45.709 }, 00:12:45.709 { 00:12:45.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.709 "dma_device_type": 2 00:12:45.709 } 00:12:45.709 ], 00:12:45.709 "driver_specific": {} 00:12:45.709 }, 00:12:45.709 { 00:12:45.709 "name": "Malloc5", 00:12:45.709 "aliases": [ 00:12:45.709 "2e713b62-025d-44ce-9881-36d4d6bd1231" 00:12:45.709 ], 00:12:45.709 "product_name": "Malloc disk", 00:12:45.709 "block_size": 512, 00:12:45.709 "num_blocks": 131072, 00:12:45.709 "uuid": "2e713b62-025d-44ce-9881-36d4d6bd1231", 00:12:45.709 "assigned_rate_limits": { 00:12:45.709 "rw_ios_per_sec": 0, 00:12:45.709 "rw_mbytes_per_sec": 0, 00:12:45.709 "r_mbytes_per_sec": 0, 00:12:45.709 "w_mbytes_per_sec": 0 00:12:45.709 }, 00:12:45.709 "claimed": false, 00:12:45.709 "zoned": false, 00:12:45.709 "supported_io_types": { 00:12:45.709 "read": true, 00:12:45.709 "write": true, 00:12:45.709 "unmap": true, 00:12:45.709 "flush": true, 00:12:45.709 "reset": true, 00:12:45.709 "nvme_admin": false, 00:12:45.709 "nvme_io": false, 00:12:45.709 "nvme_io_md": false, 00:12:45.709 "write_zeroes": true, 00:12:45.709 "zcopy": true, 00:12:45.709 "get_zone_info": false, 00:12:45.709 "zone_management": false, 00:12:45.709 "zone_append": false, 00:12:45.709 "compare": false, 00:12:45.709 "compare_and_write": false, 00:12:45.709 "abort": true, 00:12:45.709 "seek_hole": false, 00:12:45.709 "seek_data": false, 00:12:45.709 "copy": true, 00:12:45.709 "nvme_iov_md": false 00:12:45.709 }, 00:12:45.709 "memory_domains": [ 00:12:45.709 { 00:12:45.709 "dma_device_id": "system", 00:12:45.709 "dma_device_type": 1 00:12:45.709 }, 00:12:45.709 { 00:12:45.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.709 "dma_device_type": 2 00:12:45.709 } 00:12:45.709 ], 00:12:45.709 "driver_specific": {} 00:12:45.709 } 00:12:45.709 ] 00:12:45.709 Cleaning up iSCSI connection 00:12:45.709 22:14:17 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@53 -- # trap - SIGINT SIGTERM EXIT 00:12:45.709 
22:14:17 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@55 -- # iscsicleanup 00:12:45.709 22:14:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:12:45.709 22:14:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:12:45.709 iscsiadm: No matching sessions found 00:12:45.709 22:14:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@981 -- # true 00:12:45.709 22:14:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:12:45.709 iscsiadm: No records found 00:12:45.709 22:14:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@982 -- # true 00:12:45.709 22:14:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@983 -- # rm -rf 00:12:45.709 22:14:17 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@56 -- # killprocess 81895 00:12:45.709 22:14:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@948 -- # '[' -z 81895 ']' 00:12:45.709 22:14:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@952 -- # kill -0 81895 00:12:45.709 22:14:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@953 -- # uname 00:12:45.709 22:14:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:45.709 22:14:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81895 00:12:45.709 killing process with pid 81895 00:12:45.709 22:14:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:45.709 22:14:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:45.709 22:14:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81895' 00:12:45.709 22:14:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@967 -- # kill 81895 
00:12:45.709 22:14:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@972 -- # wait 81895 00:12:46.278 22:14:18 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@58 -- # iscsitestfini 00:12:46.278 22:14:18 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:12:46.278 00:12:46.278 real 0m27.697s 00:12:46.278 user 0m47.098s 00:12:46.278 sys 0m4.562s 00:12:46.278 22:14:18 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:46.278 ************************************ 00:12:46.278 END TEST iscsi_tgt_rpc_config 00:12:46.278 22:14:18 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:12:46.278 ************************************ 00:12:46.278 22:14:18 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@36 -- # run_test iscsi_tgt_iscsi_lvol /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol/iscsi_lvol.sh 00:12:46.278 22:14:18 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:46.278 22:14:18 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:46.278 22:14:18 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:12:46.278 ************************************ 00:12:46.278 START TEST iscsi_tgt_iscsi_lvol 00:12:46.278 ************************************ 00:12:46.278 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol/iscsi_lvol.sh 00:12:46.278 * Looking for test storage... 
00:12:46.278 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol 00:12:46.278 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:12:46.278 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:12:46.278 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:12:46.278 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:12:46.278 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:12:46.278 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:12:46.278 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:12:46.278 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:12:46.278 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:12:46.278 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:12:46.278 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:12:46.278 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:12:46.278 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:12:46.278 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:12:46.278 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:12:46.278 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:12:46.278 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:12:46.278 22:14:18 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:12:46.279 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:12:46.279 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:12:46.279 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@11 -- # iscsitestinit 00:12:46.279 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:12:46.279 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@13 -- # MALLOC_BDEV_SIZE=128 00:12:46.279 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:46.279 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@15 -- # '[' 1 -eq 1 ']' 00:12:46.279 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@16 -- # NUM_LVS=10 00:12:46.279 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@17 -- # NUM_LVOL=10 00:12:46.279 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@23 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:46.279 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@24 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:12:46.279 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@26 -- # timing_enter start_iscsi_tgt 00:12:46.279 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:46.279 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:46.279 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@29 -- # pid=82433 00:12:46.279 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@28 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:12:46.279 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@30 -- # echo 'Process pid: 82433' 
00:12:46.279 Process pid: 82433 00:12:46.279 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@32 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:12:46.279 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@34 -- # waitforlisten 82433 00:12:46.279 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@829 -- # '[' -z 82433 ']' 00:12:46.279 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.279 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:46.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.279 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.279 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:46.279 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:46.279 [2024-07-23 22:14:18.426676] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:12:46.279 [2024-07-23 22:14:18.426747] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82433 ] 00:12:46.538 [2024-07-23 22:14:18.545017] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:46.538 [2024-07-23 22:14:18.560417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:46.538 [2024-07-23 22:14:18.605340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.538 [2024-07-23 22:14:18.605466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:46.538 [2024-07-23 22:14:18.605612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:46.538 [2024-07-23 22:14:18.605613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.538 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:46.538 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@862 -- # return 0 00:12:46.538 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 16 00:12:46.796 22:14:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:47.056 [2024-07-23 22:14:19.027139] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:47.056 iscsi_tgt is listening. Running tests... 00:12:47.056 22:14:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@37 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:12:47.056 22:14:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@39 -- # timing_exit start_iscsi_tgt 00:12:47.056 22:14:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:47.056 22:14:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:47.056 22:14:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@41 -- # timing_enter setup 00:12:47.056 22:14:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:47.056 22:14:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:47.056 22:14:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:12:47.315 22:14:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # seq 1 10 00:12:47.315 22:14:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:12:47.315 22:14:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=3 00:12:47.315 22:14:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 3 ANY 10.0.0.2/32 00:12:47.574 22:14:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 1 -eq 1 ']' 00:12:47.574 22:14:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:12:47.833 22:14:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@50 -- # malloc_bdevs='Malloc0 ' 00:12:47.833 22:14:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:12:48.091 22:14:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@51 -- # malloc_bdevs+=Malloc1 00:12:48.091 22:14:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@52 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:48.091 22:14:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@53 -- # bdev=raid0 00:12:48.091 22:14:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs_1 -c 1048576 00:12:48.351 22:14:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=8b2116e9-4e7d-42a2-8b94-061cd9d01633 00:12:48.351 22:14:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:12:48.351 22:14:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:12:48.351 22:14:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:48.351 22:14:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b2116e9-4e7d-42a2-8b94-061cd9d01633 lbd_1 10 00:12:48.610 22:14:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c9c82808-2973-4a8b-ac19-7cc507973bf5 00:12:48.610 22:14:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c9c82808-2973-4a8b-ac19-7cc507973bf5:0 ' 00:12:48.610 22:14:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:48.610 22:14:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b2116e9-4e7d-42a2-8b94-061cd9d01633 lbd_2 10 00:12:48.868 22:14:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f763dae8-846a-44af-be47-6b682602f5c0 00:12:48.868 22:14:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f763dae8-846a-44af-be47-6b682602f5c0:1 ' 00:12:48.868 22:14:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:48.868 22:14:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b2116e9-4e7d-42a2-8b94-061cd9d01633 lbd_3 10 00:12:48.868 22:14:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f1adf240-2953-431d-b709-9a736f3a300e 00:12:48.868 22:14:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f1adf240-2953-431d-b709-9a736f3a300e:2 ' 00:12:48.869 22:14:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:48.869 22:14:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b2116e9-4e7d-42a2-8b94-061cd9d01633 lbd_4 10 00:12:49.127 22:14:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=5e0ed3a4-ca64-4064-9450-ce3d5cb0b712 00:12:49.127 22:14:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='5e0ed3a4-ca64-4064-9450-ce3d5cb0b712:3 ' 00:12:49.127 22:14:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:49.127 22:14:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b2116e9-4e7d-42a2-8b94-061cd9d01633 lbd_5 10 00:12:49.387 22:14:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b6e1b836-420d-41ab-93ec-533a0436c2b3 00:12:49.387 22:14:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b6e1b836-420d-41ab-93ec-533a0436c2b3:4 ' 00:12:49.387 22:14:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:49.387 22:14:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b2116e9-4e7d-42a2-8b94-061cd9d01633 lbd_6 10 00:12:49.387 22:14:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=4383a17b-6cdc-46b4-ac41-ea6dba91167d 00:12:49.387 22:14:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@62 -- # LUNs+='4383a17b-6cdc-46b4-ac41-ea6dba91167d:5 ' 00:12:49.387 22:14:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:49.387 22:14:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b2116e9-4e7d-42a2-8b94-061cd9d01633 lbd_7 10 00:12:49.646 22:14:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=207cea2b-89ec-459a-9c43-54216b0d90c1 00:12:49.646 22:14:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='207cea2b-89ec-459a-9c43-54216b0d90c1:6 ' 00:12:49.646 22:14:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:49.646 22:14:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b2116e9-4e7d-42a2-8b94-061cd9d01633 lbd_8 10 00:12:49.905 22:14:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=5e197a99-61d1-4eb9-b729-d3349f6edbe7 00:12:49.905 22:14:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='5e197a99-61d1-4eb9-b729-d3349f6edbe7:7 ' 00:12:49.905 22:14:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:49.905 22:14:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b2116e9-4e7d-42a2-8b94-061cd9d01633 lbd_9 10 00:12:50.165 22:14:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=d3139d80-75bd-416c-a26c-e8a2a017a39b 00:12:50.165 22:14:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='d3139d80-75bd-416c-a26c-e8a2a017a39b:8 ' 00:12:50.165 22:14:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:50.165 22:14:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b2116e9-4e7d-42a2-8b94-061cd9d01633 lbd_10 10 00:12:50.165 22:14:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f86952a5-8282-46af-a018-2decb7699b49 00:12:50.165 22:14:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f86952a5-8282-46af-a018-2decb7699b49:9 ' 00:12:50.165 22:14:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target1 Target1_alias 'c9c82808-2973-4a8b-ac19-7cc507973bf5:0 f763dae8-846a-44af-be47-6b682602f5c0:1 f1adf240-2953-431d-b709-9a736f3a300e:2 5e0ed3a4-ca64-4064-9450-ce3d5cb0b712:3 b6e1b836-420d-41ab-93ec-533a0436c2b3:4 4383a17b-6cdc-46b4-ac41-ea6dba91167d:5 207cea2b-89ec-459a-9c43-54216b0d90c1:6 5e197a99-61d1-4eb9-b729-d3349f6edbe7:7 d3139d80-75bd-416c-a26c-e8a2a017a39b:8 f86952a5-8282-46af-a018-2decb7699b49:9 ' 1:3 256 -d 00:12:50.425 22:14:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:12:50.425 22:14:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=4 00:12:50.425 22:14:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 4 ANY 10.0.0.2/32 00:12:50.685 22:14:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 2 -eq 1 ']' 00:12:50.685 22:14:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:12:50.945 22:14:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc2 00:12:50.945 22:14:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc2 lvs_2 -c 1048576 00:12:50.945 22:14:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=5ef89e4c-d731-42ad-bcaf-f36cc48460f1 00:12:50.945 
22:14:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:12:50.945 22:14:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:12:50.945 22:14:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:50.945 22:14:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5ef89e4c-d731-42ad-bcaf-f36cc48460f1 lbd_1 10 00:12:51.205 22:14:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=ec3adea9-4786-4227-a469-7effb4ed4b74 00:12:51.205 22:14:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='ec3adea9-4786-4227-a469-7effb4ed4b74:0 ' 00:12:51.205 22:14:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:51.205 22:14:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5ef89e4c-d731-42ad-bcaf-f36cc48460f1 lbd_2 10 00:12:51.464 22:14:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=df5e6ec7-06fe-4724-8037-f63fbd84ed12 00:12:51.464 22:14:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='df5e6ec7-06fe-4724-8037-f63fbd84ed12:1 ' 00:12:51.464 22:14:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:51.464 22:14:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5ef89e4c-d731-42ad-bcaf-f36cc48460f1 lbd_3 10 00:12:51.723 22:14:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=12acf070-0574-41f1-89d7-df4ac9288018 00:12:51.723 22:14:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='12acf070-0574-41f1-89d7-df4ac9288018:2 ' 00:12:51.723 22:14:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:51.723 22:14:23 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5ef89e4c-d731-42ad-bcaf-f36cc48460f1 lbd_4 10 00:12:51.982 22:14:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=37317616-31dd-4ae3-828f-babf9d6139f2 00:12:51.982 22:14:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='37317616-31dd-4ae3-828f-babf9d6139f2:3 ' 00:12:51.982 22:14:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:51.982 22:14:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5ef89e4c-d731-42ad-bcaf-f36cc48460f1 lbd_5 10 00:12:51.982 22:14:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=a51c0415-2314-41e3-9f25-134fe6004bf2 00:12:51.982 22:14:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='a51c0415-2314-41e3-9f25-134fe6004bf2:4 ' 00:12:51.982 22:14:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:51.982 22:14:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5ef89e4c-d731-42ad-bcaf-f36cc48460f1 lbd_6 10 00:12:52.241 22:14:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=37ea5cf7-1877-4dd8-b666-7becd4c265b9 00:12:52.241 22:14:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='37ea5cf7-1877-4dd8-b666-7becd4c265b9:5 ' 00:12:52.241 22:14:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:52.241 22:14:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5ef89e4c-d731-42ad-bcaf-f36cc48460f1 lbd_7 10 00:12:52.501 22:14:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b792f93f-9ced-418d-9d17-7c1a2e723d6b 
00:12:52.501 22:14:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b792f93f-9ced-418d-9d17-7c1a2e723d6b:6 ' 00:12:52.501 22:14:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:52.501 22:14:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5ef89e4c-d731-42ad-bcaf-f36cc48460f1 lbd_8 10 00:12:52.761 22:14:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=ff28bb2f-d3d1-458b-9079-adda8cdeb8ce 00:12:52.761 22:14:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='ff28bb2f-d3d1-458b-9079-adda8cdeb8ce:7 ' 00:12:52.761 22:14:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:52.761 22:14:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5ef89e4c-d731-42ad-bcaf-f36cc48460f1 lbd_9 10 00:12:52.761 22:14:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=9970c884-e2c3-4db7-92e5-0d5dc77a1dde 00:12:52.761 22:14:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='9970c884-e2c3-4db7-92e5-0d5dc77a1dde:8 ' 00:12:52.761 22:14:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:52.761 22:14:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5ef89e4c-d731-42ad-bcaf-f36cc48460f1 lbd_10 10 00:12:53.021 22:14:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=54bb6f71-63b1-4b40-8f0d-af38a0962afb 00:12:53.021 22:14:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='54bb6f71-63b1-4b40-8f0d-af38a0962afb:9 ' 00:12:53.021 22:14:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target2 Target2_alias 
'ec3adea9-4786-4227-a469-7effb4ed4b74:0 df5e6ec7-06fe-4724-8037-f63fbd84ed12:1 12acf070-0574-41f1-89d7-df4ac9288018:2 37317616-31dd-4ae3-828f-babf9d6139f2:3 a51c0415-2314-41e3-9f25-134fe6004bf2:4 37ea5cf7-1877-4dd8-b666-7becd4c265b9:5 b792f93f-9ced-418d-9d17-7c1a2e723d6b:6 ff28bb2f-d3d1-458b-9079-adda8cdeb8ce:7 9970c884-e2c3-4db7-92e5-0d5dc77a1dde:8 54bb6f71-63b1-4b40-8f0d-af38a0962afb:9 ' 1:4 256 -d 00:12:53.281 22:14:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:12:53.281 22:14:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=5 00:12:53.281 22:14:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 5 ANY 10.0.0.2/32 00:12:53.541 22:14:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 3 -eq 1 ']' 00:12:53.541 22:14:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:12:53.541 22:14:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc3 00:12:53.541 22:14:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc3 lvs_3 -c 1048576 00:12:53.801 22:14:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=2842e593-9763-47df-8bbc-b4498a7f5d89 00:12:53.801 22:14:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:12:53.801 22:14:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:12:53.801 22:14:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:53.801 22:14:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2842e593-9763-47df-8bbc-b4498a7f5d89 lbd_1 10 00:12:54.060 22:14:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@61 -- # lb_name=15531eee-4ba3-43ee-8461-ad97d07785f7 00:12:54.060 22:14:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='15531eee-4ba3-43ee-8461-ad97d07785f7:0 ' 00:12:54.060 22:14:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:54.060 22:14:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2842e593-9763-47df-8bbc-b4498a7f5d89 lbd_2 10 00:12:54.320 22:14:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=2e75db3e-fc7e-4c14-9b41-9ec703b68ddc 00:12:54.320 22:14:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='2e75db3e-fc7e-4c14-9b41-9ec703b68ddc:1 ' 00:12:54.320 22:14:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:54.320 22:14:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2842e593-9763-47df-8bbc-b4498a7f5d89 lbd_3 10 00:12:54.580 22:14:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=1d6535f9-f290-43e2-b311-10b3753199b5 00:12:54.580 22:14:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='1d6535f9-f290-43e2-b311-10b3753199b5:2 ' 00:12:54.580 22:14:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:54.580 22:14:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2842e593-9763-47df-8bbc-b4498a7f5d89 lbd_4 10 00:12:54.580 22:14:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f3d7a694-80f9-43ea-a46e-2a2af88b4d81 00:12:54.580 22:14:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f3d7a694-80f9-43ea-a46e-2a2af88b4d81:3 ' 00:12:54.580 22:14:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 
$NUM_LVOL) 00:12:54.580 22:14:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2842e593-9763-47df-8bbc-b4498a7f5d89 lbd_5 10 00:12:54.840 22:14:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b7e5f7f2-6ec5-4f51-8a88-73f5c0e31616 00:12:54.840 22:14:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b7e5f7f2-6ec5-4f51-8a88-73f5c0e31616:4 ' 00:12:54.840 22:14:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:54.840 22:14:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2842e593-9763-47df-8bbc-b4498a7f5d89 lbd_6 10 00:12:55.099 22:14:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=52140c5f-7adf-4f91-8adb-7e29da61e49d 00:12:55.099 22:14:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='52140c5f-7adf-4f91-8adb-7e29da61e49d:5 ' 00:12:55.099 22:14:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:55.100 22:14:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2842e593-9763-47df-8bbc-b4498a7f5d89 lbd_7 10 00:12:55.360 22:14:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=d6cd3fc4-d340-4a4a-b906-e4d8aa63571d 00:12:55.360 22:14:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='d6cd3fc4-d340-4a4a-b906-e4d8aa63571d:6 ' 00:12:55.360 22:14:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:55.360 22:14:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2842e593-9763-47df-8bbc-b4498a7f5d89 lbd_8 10 00:12:55.360 22:14:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
lb_name=e35e2896-2a31-4ed5-93bb-836f45477662 00:12:55.360 22:14:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='e35e2896-2a31-4ed5-93bb-836f45477662:7 ' 00:12:55.360 22:14:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:55.360 22:14:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2842e593-9763-47df-8bbc-b4498a7f5d89 lbd_9 10 00:12:55.620 22:14:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=21632931-9f53-4646-b547-fd414239723d 00:12:55.620 22:14:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='21632931-9f53-4646-b547-fd414239723d:8 ' 00:12:55.620 22:14:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:55.620 22:14:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2842e593-9763-47df-8bbc-b4498a7f5d89 lbd_10 10 00:12:55.879 22:14:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=2a37dbd3-cfd0-4ef3-91dc-f9d3399b8067 00:12:55.879 22:14:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='2a37dbd3-cfd0-4ef3-91dc-f9d3399b8067:9 ' 00:12:55.879 22:14:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias '15531eee-4ba3-43ee-8461-ad97d07785f7:0 2e75db3e-fc7e-4c14-9b41-9ec703b68ddc:1 1d6535f9-f290-43e2-b311-10b3753199b5:2 f3d7a694-80f9-43ea-a46e-2a2af88b4d81:3 b7e5f7f2-6ec5-4f51-8a88-73f5c0e31616:4 52140c5f-7adf-4f91-8adb-7e29da61e49d:5 d6cd3fc4-d340-4a4a-b906-e4d8aa63571d:6 e35e2896-2a31-4ed5-93bb-836f45477662:7 21632931-9f53-4646-b547-fd414239723d:8 2a37dbd3-cfd0-4ef3-91dc-f9d3399b8067:9 ' 1:5 256 -d 00:12:55.880 22:14:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 
00:12:55.880 22:14:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=6 00:12:55.880 22:14:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 6 ANY 10.0.0.2/32 00:12:56.139 22:14:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 4 -eq 1 ']' 00:12:56.139 22:14:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:12:56.399 22:14:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc4 00:12:56.399 22:14:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc4 lvs_4 -c 1048576 00:12:56.658 22:14:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=381874df-34c3-4fab-8a3f-609d08f0790a 00:12:56.658 22:14:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:12:56.658 22:14:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:12:56.658 22:14:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:56.658 22:14:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 381874df-34c3-4fab-8a3f-609d08f0790a lbd_1 10 00:12:56.658 22:14:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=2b6863b4-65cc-443a-ab44-19dd31bf7464 00:12:56.658 22:14:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='2b6863b4-65cc-443a-ab44-19dd31bf7464:0 ' 00:12:56.658 22:14:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:56.658 22:14:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 381874df-34c3-4fab-8a3f-609d08f0790a lbd_2 10 00:12:56.918 22:14:29 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b2456e79-3dd1-4b64-b507-081149845af7 00:12:56.918 22:14:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b2456e79-3dd1-4b64-b507-081149845af7:1 ' 00:12:56.918 22:14:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:56.918 22:14:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 381874df-34c3-4fab-8a3f-609d08f0790a lbd_3 10 00:12:57.178 22:14:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f79097fe-5b21-40b0-b041-91e25ae9be1c 00:12:57.178 22:14:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f79097fe-5b21-40b0-b041-91e25ae9be1c:2 ' 00:12:57.178 22:14:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:57.178 22:14:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 381874df-34c3-4fab-8a3f-609d08f0790a lbd_4 10 00:12:57.178 22:14:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=cfb51884-d001-4c59-b9de-e32627bb0907 00:12:57.178 22:14:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='cfb51884-d001-4c59-b9de-e32627bb0907:3 ' 00:12:57.178 22:14:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:57.178 22:14:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 381874df-34c3-4fab-8a3f-609d08f0790a lbd_5 10 00:12:57.438 22:14:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=528b8fa8-6cf2-4e84-8b85-096ebab2cdfd 00:12:57.438 22:14:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='528b8fa8-6cf2-4e84-8b85-096ebab2cdfd:4 ' 00:12:57.438 22:14:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:57.438 22:14:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 381874df-34c3-4fab-8a3f-609d08f0790a lbd_6 10 00:12:57.697 22:14:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=8431cccc-da9c-430c-90c8-069bdb7a5e94 00:12:57.697 22:14:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='8431cccc-da9c-430c-90c8-069bdb7a5e94:5 ' 00:12:57.697 22:14:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:57.697 22:14:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 381874df-34c3-4fab-8a3f-609d08f0790a lbd_7 10 00:12:57.957 22:14:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=a934d457-5b15-46a9-bea0-53f717feb867 00:12:57.957 22:14:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='a934d457-5b15-46a9-bea0-53f717feb867:6 ' 00:12:57.957 22:14:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:57.957 22:14:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 381874df-34c3-4fab-8a3f-609d08f0790a lbd_8 10 00:12:57.957 22:14:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=00bc6827-06c0-4e00-a737-defa96660fca 00:12:57.957 22:14:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='00bc6827-06c0-4e00-a737-defa96660fca:7 ' 00:12:57.957 22:14:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:57.957 22:14:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 381874df-34c3-4fab-8a3f-609d08f0790a lbd_9 10 00:12:58.216 22:14:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@61 -- # lb_name=71e751e9-f35d-4e6c-be84-874ce817a497 00:12:58.216 22:14:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='71e751e9-f35d-4e6c-be84-874ce817a497:8 ' 00:12:58.216 22:14:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:58.216 22:14:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 381874df-34c3-4fab-8a3f-609d08f0790a lbd_10 10 00:12:58.477 22:14:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=101f104c-d2ab-4377-ad41-8441c9d33494 00:12:58.477 22:14:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='101f104c-d2ab-4377-ad41-8441c9d33494:9 ' 00:12:58.478 22:14:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target4 Target4_alias '2b6863b4-65cc-443a-ab44-19dd31bf7464:0 b2456e79-3dd1-4b64-b507-081149845af7:1 f79097fe-5b21-40b0-b041-91e25ae9be1c:2 cfb51884-d001-4c59-b9de-e32627bb0907:3 528b8fa8-6cf2-4e84-8b85-096ebab2cdfd:4 8431cccc-da9c-430c-90c8-069bdb7a5e94:5 a934d457-5b15-46a9-bea0-53f717feb867:6 00bc6827-06c0-4e00-a737-defa96660fca:7 71e751e9-f35d-4e6c-be84-874ce817a497:8 101f104c-d2ab-4377-ad41-8441c9d33494:9 ' 1:6 256 -d 00:12:58.737 22:14:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:12:58.737 22:14:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=7 00:12:58.737 22:14:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 7 ANY 10.0.0.2/32 00:12:58.737 22:14:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 5 -eq 1 ']' 00:12:58.737 22:14:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:12:58.995 
22:14:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc5 00:12:58.995 22:14:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc5 lvs_5 -c 1048576 00:12:59.253 22:14:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=6c7d289b-7411-443d-bf0c-8833c7e8770c 00:12:59.253 22:14:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:12:59.253 22:14:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:12:59.253 22:14:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:59.253 22:14:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6c7d289b-7411-443d-bf0c-8833c7e8770c lbd_1 10 00:12:59.253 22:14:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=0176ed0b-6fb1-44c1-a013-f22d9ffda4b9 00:12:59.253 22:14:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='0176ed0b-6fb1-44c1-a013-f22d9ffda4b9:0 ' 00:12:59.253 22:14:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:59.253 22:14:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6c7d289b-7411-443d-bf0c-8833c7e8770c lbd_2 10 00:12:59.511 22:14:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=d09bd960-1ca5-4ea5-b58a-6fb8fea0ed11 00:12:59.511 22:14:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='d09bd960-1ca5-4ea5-b58a-6fb8fea0ed11:1 ' 00:12:59.511 22:14:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:59.511 22:14:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6c7d289b-7411-443d-bf0c-8833c7e8770c lbd_3 10 
00:12:59.770 22:14:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b824e0e7-add9-44c7-9f6f-b1ec3cf7dcdd 00:12:59.770 22:14:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b824e0e7-add9-44c7-9f6f-b1ec3cf7dcdd:2 ' 00:12:59.770 22:14:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:59.770 22:14:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6c7d289b-7411-443d-bf0c-8833c7e8770c lbd_4 10 00:13:00.029 22:14:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=17c8dcf7-2dc9-472d-8602-dc03cd5962dd 00:13:00.029 22:14:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='17c8dcf7-2dc9-472d-8602-dc03cd5962dd:3 ' 00:13:00.029 22:14:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:00.029 22:14:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6c7d289b-7411-443d-bf0c-8833c7e8770c lbd_5 10 00:13:00.029 22:14:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=0f2251cd-0c84-40f9-bf93-a95df5b9cecb 00:13:00.029 22:14:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='0f2251cd-0c84-40f9-bf93-a95df5b9cecb:4 ' 00:13:00.029 22:14:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:00.029 22:14:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6c7d289b-7411-443d-bf0c-8833c7e8770c lbd_6 10 00:13:00.288 22:14:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c5b91b7c-9ab7-465a-acfe-86ad70021ed1 00:13:00.288 22:14:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c5b91b7c-9ab7-465a-acfe-86ad70021ed1:5 ' 00:13:00.288 22:14:32 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:00.288 22:14:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6c7d289b-7411-443d-bf0c-8833c7e8770c lbd_7 10 00:13:00.547 22:14:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=10d1eacb-7b3c-48ef-b7a1-02dbe7f1a470 00:13:00.547 22:14:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='10d1eacb-7b3c-48ef-b7a1-02dbe7f1a470:6 ' 00:13:00.547 22:14:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:00.547 22:14:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6c7d289b-7411-443d-bf0c-8833c7e8770c lbd_8 10 00:13:00.806 22:14:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=2fdb8978-98c5-45bb-b3f8-970f5b0081e1 00:13:00.806 22:14:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='2fdb8978-98c5-45bb-b3f8-970f5b0081e1:7 ' 00:13:00.806 22:14:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:00.806 22:14:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6c7d289b-7411-443d-bf0c-8833c7e8770c lbd_9 10 00:13:00.806 22:14:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=98f58649-32c8-442b-8265-5ccf5d62f319 00:13:00.806 22:14:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='98f58649-32c8-442b-8265-5ccf5d62f319:8 ' 00:13:00.806 22:14:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:00.806 22:14:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6c7d289b-7411-443d-bf0c-8833c7e8770c lbd_10 10 00:13:01.066 22:14:33 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=54764210-005a-4ebb-97ac-36990db94969 00:13:01.066 22:14:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='54764210-005a-4ebb-97ac-36990db94969:9 ' 00:13:01.066 22:14:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target5 Target5_alias '0176ed0b-6fb1-44c1-a013-f22d9ffda4b9:0 d09bd960-1ca5-4ea5-b58a-6fb8fea0ed11:1 b824e0e7-add9-44c7-9f6f-b1ec3cf7dcdd:2 17c8dcf7-2dc9-472d-8602-dc03cd5962dd:3 0f2251cd-0c84-40f9-bf93-a95df5b9cecb:4 c5b91b7c-9ab7-465a-acfe-86ad70021ed1:5 10d1eacb-7b3c-48ef-b7a1-02dbe7f1a470:6 2fdb8978-98c5-45bb-b3f8-970f5b0081e1:7 98f58649-32c8-442b-8265-5ccf5d62f319:8 54764210-005a-4ebb-97ac-36990db94969:9 ' 1:7 256 -d 00:13:01.325 22:14:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:13:01.325 22:14:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=8 00:13:01.325 22:14:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 8 ANY 10.0.0.2/32 00:13:01.325 22:14:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 6 -eq 1 ']' 00:13:01.325 22:14:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:13:01.583 22:14:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc6 00:13:01.584 22:14:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc6 lvs_6 -c 1048576 00:13:01.842 22:14:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=2d254c7c-8a93-462e-89d7-03c19911ce47 00:13:01.842 22:14:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:13:01.842 22:14:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:13:01.842 22:14:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:01.842 22:14:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2d254c7c-8a93-462e-89d7-03c19911ce47 lbd_1 10 00:13:02.101 22:14:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c8426f07-dd15-46f7-b250-7e1d4060665c 00:13:02.101 22:14:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c8426f07-dd15-46f7-b250-7e1d4060665c:0 ' 00:13:02.101 22:14:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:02.101 22:14:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2d254c7c-8a93-462e-89d7-03c19911ce47 lbd_2 10 00:13:02.101 22:14:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=588587ad-4a0a-4246-8546-53628e20c2d8 00:13:02.101 22:14:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='588587ad-4a0a-4246-8546-53628e20c2d8:1 ' 00:13:02.101 22:14:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:02.101 22:14:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2d254c7c-8a93-462e-89d7-03c19911ce47 lbd_3 10 00:13:02.360 22:14:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=88a781fc-13c2-4123-a782-509095e26775 00:13:02.360 22:14:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='88a781fc-13c2-4123-a782-509095e26775:2 ' 00:13:02.360 22:14:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:02.360 22:14:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
2d254c7c-8a93-462e-89d7-03c19911ce47 lbd_4 10 00:13:02.620 22:14:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c587b8e5-9093-428f-b997-da0f1305fb47 00:13:02.620 22:14:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c587b8e5-9093-428f-b997-da0f1305fb47:3 ' 00:13:02.620 22:14:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:02.620 22:14:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2d254c7c-8a93-462e-89d7-03c19911ce47 lbd_5 10 00:13:02.620 22:14:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=2f3cee72-4f4f-4494-a1ba-3910291f2242 00:13:02.620 22:14:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='2f3cee72-4f4f-4494-a1ba-3910291f2242:4 ' 00:13:02.620 22:14:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:02.620 22:14:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2d254c7c-8a93-462e-89d7-03c19911ce47 lbd_6 10 00:13:02.879 22:14:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=abee9565-a719-4f3b-b559-eb7b3038f11d 00:13:02.879 22:14:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='abee9565-a719-4f3b-b559-eb7b3038f11d:5 ' 00:13:02.879 22:14:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:02.879 22:14:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2d254c7c-8a93-462e-89d7-03c19911ce47 lbd_7 10 00:13:03.138 22:14:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=fe87872c-ea76-42fb-b12b-785c2c7811a7 00:13:03.138 22:14:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='fe87872c-ea76-42fb-b12b-785c2c7811a7:6 ' 
00:13:03.138 22:14:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:03.138 22:14:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2d254c7c-8a93-462e-89d7-03c19911ce47 lbd_8 10 00:13:03.138 22:14:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=10573e22-676a-433b-a56f-be3af843b34f 00:13:03.138 22:14:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='10573e22-676a-433b-a56f-be3af843b34f:7 ' 00:13:03.138 22:14:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:03.138 22:14:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2d254c7c-8a93-462e-89d7-03c19911ce47 lbd_9 10 00:13:03.398 22:14:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=7d77e82e-22b5-4b96-ae99-6974bb6950f3 00:13:03.398 22:14:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='7d77e82e-22b5-4b96-ae99-6974bb6950f3:8 ' 00:13:03.398 22:14:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:03.398 22:14:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2d254c7c-8a93-462e-89d7-03c19911ce47 lbd_10 10 00:13:03.657 22:14:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=6f96ccd0-aa93-4d1d-8524-75d27010b419 00:13:03.657 22:14:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='6f96ccd0-aa93-4d1d-8524-75d27010b419:9 ' 00:13:03.657 22:14:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target6 Target6_alias 'c8426f07-dd15-46f7-b250-7e1d4060665c:0 588587ad-4a0a-4246-8546-53628e20c2d8:1 88a781fc-13c2-4123-a782-509095e26775:2 
c587b8e5-9093-428f-b997-da0f1305fb47:3 2f3cee72-4f4f-4494-a1ba-3910291f2242:4 abee9565-a719-4f3b-b559-eb7b3038f11d:5 fe87872c-ea76-42fb-b12b-785c2c7811a7:6 10573e22-676a-433b-a56f-be3af843b34f:7 7d77e82e-22b5-4b96-ae99-6974bb6950f3:8 6f96ccd0-aa93-4d1d-8524-75d27010b419:9 ' 1:8 256 -d 00:13:03.657 22:14:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:13:03.657 22:14:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=9 00:13:03.657 22:14:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 9 ANY 10.0.0.2/32 00:13:03.916 22:14:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 7 -eq 1 ']' 00:13:03.916 22:14:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:13:04.176 22:14:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc7 00:13:04.176 22:14:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc7 lvs_7 -c 1048576 00:13:04.434 22:14:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=7092fa1b-4307-45b1-9f20-e54e9a6faad4 00:13:04.434 22:14:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:13:04.434 22:14:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:13:04.434 22:14:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:04.434 22:14:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7092fa1b-4307-45b1-9f20-e54e9a6faad4 lbd_1 10 00:13:04.434 22:14:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=75b5cdd5-02da-4ab2-93d9-88bc25fd3123 00:13:04.434 22:14:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@62 -- # LUNs+='75b5cdd5-02da-4ab2-93d9-88bc25fd3123:0 ' 00:13:04.434 22:14:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:04.434 22:14:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7092fa1b-4307-45b1-9f20-e54e9a6faad4 lbd_2 10 00:13:04.693 22:14:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=d1891716-dd27-437e-aa83-6e86fb4e1a7f 00:13:04.693 22:14:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='d1891716-dd27-437e-aa83-6e86fb4e1a7f:1 ' 00:13:04.693 22:14:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:04.693 22:14:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7092fa1b-4307-45b1-9f20-e54e9a6faad4 lbd_3 10 00:13:04.997 22:14:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=025162e3-2caf-4b08-8cc1-20d02621c36c 00:13:04.997 22:14:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='025162e3-2caf-4b08-8cc1-20d02621c36c:2 ' 00:13:04.997 22:14:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:04.997 22:14:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7092fa1b-4307-45b1-9f20-e54e9a6faad4 lbd_4 10 00:13:04.997 22:14:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=d0dd9eb2-fc20-4abb-86c7-f09921657bda 00:13:04.997 22:14:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='d0dd9eb2-fc20-4abb-86c7-f09921657bda:3 ' 00:13:04.997 22:14:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:04.997 22:14:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7092fa1b-4307-45b1-9f20-e54e9a6faad4 lbd_5 10 00:13:05.276 22:14:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=04aca5e2-d53c-48e3-9b0a-861aed324982 00:13:05.277 22:14:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='04aca5e2-d53c-48e3-9b0a-861aed324982:4 ' 00:13:05.277 22:14:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:05.277 22:14:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7092fa1b-4307-45b1-9f20-e54e9a6faad4 lbd_6 10 00:13:05.277 22:14:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=2d4a4eef-cd03-46e7-9233-3e7d9ead5614 00:13:05.277 22:14:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='2d4a4eef-cd03-46e7-9233-3e7d9ead5614:5 ' 00:13:05.277 22:14:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:05.277 22:14:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7092fa1b-4307-45b1-9f20-e54e9a6faad4 lbd_7 10 00:13:05.536 22:14:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=ffb4012f-c5d6-44f4-838f-34108fda4fcc 00:13:05.536 22:14:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='ffb4012f-c5d6-44f4-838f-34108fda4fcc:6 ' 00:13:05.536 22:14:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:05.536 22:14:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7092fa1b-4307-45b1-9f20-e54e9a6faad4 lbd_8 10 00:13:05.795 22:14:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=3d82cd25-8e71-4363-baf5-2a8885a7046a 00:13:05.795 22:14:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@62 -- # LUNs+='3d82cd25-8e71-4363-baf5-2a8885a7046a:7 ' 00:13:05.795 22:14:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:05.795 22:14:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7092fa1b-4307-45b1-9f20-e54e9a6faad4 lbd_9 10 00:13:06.055 22:14:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=289824d2-fd9d-4fdb-acf8-60575f8ab5c4 00:13:06.055 22:14:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='289824d2-fd9d-4fdb-acf8-60575f8ab5c4:8 ' 00:13:06.055 22:14:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:06.055 22:14:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7092fa1b-4307-45b1-9f20-e54e9a6faad4 lbd_10 10 00:13:06.055 22:14:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=d1fa3dd0-8f82-4afe-ba88-eccc5ccc0b2b 00:13:06.055 22:14:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='d1fa3dd0-8f82-4afe-ba88-eccc5ccc0b2b:9 ' 00:13:06.055 22:14:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target7 Target7_alias '75b5cdd5-02da-4ab2-93d9-88bc25fd3123:0 d1891716-dd27-437e-aa83-6e86fb4e1a7f:1 025162e3-2caf-4b08-8cc1-20d02621c36c:2 d0dd9eb2-fc20-4abb-86c7-f09921657bda:3 04aca5e2-d53c-48e3-9b0a-861aed324982:4 2d4a4eef-cd03-46e7-9233-3e7d9ead5614:5 ffb4012f-c5d6-44f4-838f-34108fda4fcc:6 3d82cd25-8e71-4363-baf5-2a8885a7046a:7 289824d2-fd9d-4fdb-acf8-60575f8ab5c4:8 d1fa3dd0-8f82-4afe-ba88-eccc5ccc0b2b:9 ' 1:9 256 -d 00:13:06.314 22:14:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:13:06.314 22:14:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=10 
00:13:06.314 22:14:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 10 ANY 10.0.0.2/32 00:13:06.573 22:14:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 8 -eq 1 ']' 00:13:06.573 22:14:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:13:06.573 22:14:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc8 00:13:06.573 22:14:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc8 lvs_8 -c 1048576 00:13:06.832 22:14:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=573ba260-1eae-4826-b788-4c9da9e61b83 00:13:06.832 22:14:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:13:06.832 22:14:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:13:06.832 22:14:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:06.832 22:14:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 573ba260-1eae-4826-b788-4c9da9e61b83 lbd_1 10 00:13:07.091 22:14:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=07de3d5d-3aa4-44cb-b7ee-d217f3f6ccb0 00:13:07.091 22:14:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='07de3d5d-3aa4-44cb-b7ee-d217f3f6ccb0:0 ' 00:13:07.091 22:14:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:07.091 22:14:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 573ba260-1eae-4826-b788-4c9da9e61b83 lbd_2 10 00:13:07.091 22:14:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
lb_name=b701b128-f145-4cba-ba0e-d9a50dd89e8e 00:13:07.091 22:14:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b701b128-f145-4cba-ba0e-d9a50dd89e8e:1 ' 00:13:07.091 22:14:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:07.091 22:14:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 573ba260-1eae-4826-b788-4c9da9e61b83 lbd_3 10 00:13:07.350 22:14:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=372c38f4-166a-4d71-81ff-ca2d0e1d0220 00:13:07.350 22:14:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='372c38f4-166a-4d71-81ff-ca2d0e1d0220:2 ' 00:13:07.350 22:14:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:07.350 22:14:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 573ba260-1eae-4826-b788-4c9da9e61b83 lbd_4 10 00:13:07.609 22:14:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=7c2f2f47-84e3-498e-a4e0-a6879f75f176 00:13:07.609 22:14:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='7c2f2f47-84e3-498e-a4e0-a6879f75f176:3 ' 00:13:07.609 22:14:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:07.609 22:14:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 573ba260-1eae-4826-b788-4c9da9e61b83 lbd_5 10 00:13:07.869 22:14:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=2454975d-7fc4-45d5-a66b-4afdd2767b03 00:13:07.869 22:14:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='2454975d-7fc4-45d5-a66b-4afdd2767b03:4 ' 00:13:07.869 22:14:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:07.869 22:14:39 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 573ba260-1eae-4826-b788-4c9da9e61b83 lbd_6 10 00:13:07.869 22:14:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c593bb3c-bf47-4e1c-b5d8-77a82c03fd23 00:13:07.869 22:14:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c593bb3c-bf47-4e1c-b5d8-77a82c03fd23:5 ' 00:13:07.869 22:14:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:07.869 22:14:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 573ba260-1eae-4826-b788-4c9da9e61b83 lbd_7 10 00:13:08.128 22:14:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=334bdb26-2af8-497f-80fd-ba60bae7f209 00:13:08.128 22:14:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='334bdb26-2af8-497f-80fd-ba60bae7f209:6 ' 00:13:08.128 22:14:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:08.128 22:14:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 573ba260-1eae-4826-b788-4c9da9e61b83 lbd_8 10 00:13:08.387 22:14:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=2a365645-f1ab-4f4d-a5bd-dcc56fd28043 00:13:08.387 22:14:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='2a365645-f1ab-4f4d-a5bd-dcc56fd28043:7 ' 00:13:08.387 22:14:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:08.387 22:14:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 573ba260-1eae-4826-b788-4c9da9e61b83 lbd_9 10 00:13:08.387 22:14:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=6ac6b1aa-99d3-4ca3-ace5-5c701a99f9c2 
00:13:08.387 22:14:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='6ac6b1aa-99d3-4ca3-ace5-5c701a99f9c2:8 ' 00:13:08.387 22:14:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:08.387 22:14:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 573ba260-1eae-4826-b788-4c9da9e61b83 lbd_10 10 00:13:08.647 22:14:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=0ae10345-fe81-4e11-8fde-3d25166e0ca0 00:13:08.647 22:14:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='0ae10345-fe81-4e11-8fde-3d25166e0ca0:9 ' 00:13:08.647 22:14:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target8 Target8_alias '07de3d5d-3aa4-44cb-b7ee-d217f3f6ccb0:0 b701b128-f145-4cba-ba0e-d9a50dd89e8e:1 372c38f4-166a-4d71-81ff-ca2d0e1d0220:2 7c2f2f47-84e3-498e-a4e0-a6879f75f176:3 2454975d-7fc4-45d5-a66b-4afdd2767b03:4 c593bb3c-bf47-4e1c-b5d8-77a82c03fd23:5 334bdb26-2af8-497f-80fd-ba60bae7f209:6 2a365645-f1ab-4f4d-a5bd-dcc56fd28043:7 6ac6b1aa-99d3-4ca3-ace5-5c701a99f9c2:8 0ae10345-fe81-4e11-8fde-3d25166e0ca0:9 ' 1:10 256 -d 00:13:08.906 22:14:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:13:08.906 22:14:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=11 00:13:08.906 22:14:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 11 ANY 10.0.0.2/32 00:13:08.906 22:14:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 9 -eq 1 ']' 00:13:08.906 22:14:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:13:09.166 22:14:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # 
bdev=Malloc9 00:13:09.166 22:14:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc9 lvs_9 -c 1048576 00:13:09.426 22:14:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=bd7ecd6c-c511-446f-a0c2-7dee9c9aab37 00:13:09.426 22:14:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:13:09.426 22:14:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:13:09.426 22:14:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:09.426 22:14:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bd7ecd6c-c511-446f-a0c2-7dee9c9aab37 lbd_1 10 00:13:09.426 22:14:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b1598e79-2fa9-402a-a9f6-d36f7178d812 00:13:09.426 22:14:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b1598e79-2fa9-402a-a9f6-d36f7178d812:0 ' 00:13:09.426 22:14:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:09.426 22:14:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bd7ecd6c-c511-446f-a0c2-7dee9c9aab37 lbd_2 10 00:13:09.685 22:14:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=eac83739-a86b-4cb8-8c70-8ab8e77411f9 00:13:09.685 22:14:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='eac83739-a86b-4cb8-8c70-8ab8e77411f9:1 ' 00:13:09.685 22:14:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:09.685 22:14:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bd7ecd6c-c511-446f-a0c2-7dee9c9aab37 lbd_3 10 00:13:09.945 22:14:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@61 -- # lb_name=327f8e23-f7df-4ded-b5ec-cfaa733925f8 00:13:09.945 22:14:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='327f8e23-f7df-4ded-b5ec-cfaa733925f8:2 ' 00:13:09.945 22:14:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:09.945 22:14:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bd7ecd6c-c511-446f-a0c2-7dee9c9aab37 lbd_4 10 00:13:10.204 22:14:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=2ed877c6-0bb5-4211-824a-83db12522e8c 00:13:10.204 22:14:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='2ed877c6-0bb5-4211-824a-83db12522e8c:3 ' 00:13:10.204 22:14:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:10.204 22:14:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bd7ecd6c-c511-446f-a0c2-7dee9c9aab37 lbd_5 10 00:13:10.204 22:14:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f002e2dd-f243-459a-8ee0-d001ec8cca5e 00:13:10.204 22:14:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f002e2dd-f243-459a-8ee0-d001ec8cca5e:4 ' 00:13:10.204 22:14:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:10.204 22:14:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bd7ecd6c-c511-446f-a0c2-7dee9c9aab37 lbd_6 10 00:13:10.464 22:14:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=ed0e3470-ae9b-43e9-bad1-316460198a0e 00:13:10.464 22:14:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='ed0e3470-ae9b-43e9-bad1-316460198a0e:5 ' 00:13:10.464 22:14:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 
$NUM_LVOL) 00:13:10.464 22:14:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bd7ecd6c-c511-446f-a0c2-7dee9c9aab37 lbd_7 10 00:13:10.464 22:14:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=70cd5b99-61c6-4cd3-bae9-ad275c4706ea 00:13:10.724 22:14:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='70cd5b99-61c6-4cd3-bae9-ad275c4706ea:6 ' 00:13:10.724 22:14:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:10.724 22:14:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bd7ecd6c-c511-446f-a0c2-7dee9c9aab37 lbd_8 10 00:13:10.724 22:14:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=08b4f19d-717e-43f4-baba-66bd375fa25e 00:13:10.724 22:14:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='08b4f19d-717e-43f4-baba-66bd375fa25e:7 ' 00:13:10.724 22:14:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:10.724 22:14:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bd7ecd6c-c511-446f-a0c2-7dee9c9aab37 lbd_9 10 00:13:10.983 22:14:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c8c7b07c-c0d3-467d-8f96-baabfd30b8ba 00:13:10.983 22:14:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c8c7b07c-c0d3-467d-8f96-baabfd30b8ba:8 ' 00:13:10.983 22:14:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:10.983 22:14:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bd7ecd6c-c511-446f-a0c2-7dee9c9aab37 lbd_10 10 00:13:11.243 22:14:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
lb_name=88eb2f7b-3fc6-478f-9038-50a4c5b56714 00:13:11.243 22:14:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='88eb2f7b-3fc6-478f-9038-50a4c5b56714:9 ' 00:13:11.243 22:14:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target9 Target9_alias 'b1598e79-2fa9-402a-a9f6-d36f7178d812:0 eac83739-a86b-4cb8-8c70-8ab8e77411f9:1 327f8e23-f7df-4ded-b5ec-cfaa733925f8:2 2ed877c6-0bb5-4211-824a-83db12522e8c:3 f002e2dd-f243-459a-8ee0-d001ec8cca5e:4 ed0e3470-ae9b-43e9-bad1-316460198a0e:5 70cd5b99-61c6-4cd3-bae9-ad275c4706ea:6 08b4f19d-717e-43f4-baba-66bd375fa25e:7 c8c7b07c-c0d3-467d-8f96-baabfd30b8ba:8 88eb2f7b-3fc6-478f-9038-50a4c5b56714:9 ' 1:11 256 -d 00:13:11.243 22:14:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:13:11.243 22:14:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=12 00:13:11.243 22:14:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 12 ANY 10.0.0.2/32 00:13:11.502 22:14:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 10 -eq 1 ']' 00:13:11.502 22:14:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:13:11.761 22:14:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc10 00:13:11.761 22:14:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc10 lvs_10 -c 1048576 00:13:12.020 22:14:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=bc60476c-82cc-42ba-8cde-61f1f5730d01 00:13:12.020 22:14:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:13:12.020 22:14:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:13:12.020 
22:14:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:12.020 22:14:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bc60476c-82cc-42ba-8cde-61f1f5730d01 lbd_1 10 00:13:12.279 22:14:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=1674ce7c-d139-4be8-acbc-54674c748f8b 00:13:12.279 22:14:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='1674ce7c-d139-4be8-acbc-54674c748f8b:0 ' 00:13:12.279 22:14:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:12.279 22:14:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bc60476c-82cc-42ba-8cde-61f1f5730d01 lbd_2 10 00:13:12.537 22:14:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=7b0937a4-7811-480a-a97d-1986102c2b1f 00:13:12.537 22:14:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='7b0937a4-7811-480a-a97d-1986102c2b1f:1 ' 00:13:12.537 22:14:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:12.537 22:14:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bc60476c-82cc-42ba-8cde-61f1f5730d01 lbd_3 10 00:13:12.537 22:14:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=0dba28bf-b212-49a0-b3e7-c2bcc7771d63 00:13:12.537 22:14:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='0dba28bf-b212-49a0-b3e7-c2bcc7771d63:2 ' 00:13:12.537 22:14:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:12.537 22:14:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bc60476c-82cc-42ba-8cde-61f1f5730d01 lbd_4 10 00:13:12.796 
22:14:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=319d83ce-5d9b-4919-ac5d-e74b04ecdae1 00:13:12.796 22:14:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='319d83ce-5d9b-4919-ac5d-e74b04ecdae1:3 ' 00:13:12.796 22:14:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:12.796 22:14:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bc60476c-82cc-42ba-8cde-61f1f5730d01 lbd_5 10 00:13:13.055 22:14:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=75a9bcbd-78ae-4a0a-b41f-a2449e5c65ab 00:13:13.055 22:14:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='75a9bcbd-78ae-4a0a-b41f-a2449e5c65ab:4 ' 00:13:13.055 22:14:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:13.055 22:14:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bc60476c-82cc-42ba-8cde-61f1f5730d01 lbd_6 10 00:13:13.055 22:14:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=df415102-9ad6-44f3-9bbd-d32abc2915ae 00:13:13.055 22:14:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='df415102-9ad6-44f3-9bbd-d32abc2915ae:5 ' 00:13:13.055 22:14:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:13.055 22:14:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bc60476c-82cc-42ba-8cde-61f1f5730d01 lbd_7 10 00:13:13.316 22:14:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=27b68c8c-94ef-4af7-9fd0-9bb227a3cd09 00:13:13.316 22:14:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='27b68c8c-94ef-4af7-9fd0-9bb227a3cd09:6 ' 00:13:13.316 22:14:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:13.316 22:14:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bc60476c-82cc-42ba-8cde-61f1f5730d01 lbd_8 10 00:13:13.595 22:14:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=8d6f40ca-2a2f-4804-b4f0-fae8ead51553 00:13:13.595 22:14:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='8d6f40ca-2a2f-4804-b4f0-fae8ead51553:7 ' 00:13:13.595 22:14:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:13.595 22:14:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bc60476c-82cc-42ba-8cde-61f1f5730d01 lbd_9 10 00:13:13.595 22:14:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=411ee439-01ab-4b3c-b23f-c36ea5768095 00:13:13.595 22:14:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='411ee439-01ab-4b3c-b23f-c36ea5768095:8 ' 00:13:13.595 22:14:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:13:13.595 22:14:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bc60476c-82cc-42ba-8cde-61f1f5730d01 lbd_10 10 00:13:13.863 22:14:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=8e54302a-ca01-4ced-bf7f-03bd626f9874 00:13:13.863 22:14:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='8e54302a-ca01-4ced-bf7f-03bd626f9874:9 ' 00:13:13.863 22:14:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target10 Target10_alias '1674ce7c-d139-4be8-acbc-54674c748f8b:0 7b0937a4-7811-480a-a97d-1986102c2b1f:1 0dba28bf-b212-49a0-b3e7-c2bcc7771d63:2 319d83ce-5d9b-4919-ac5d-e74b04ecdae1:3 
75a9bcbd-78ae-4a0a-b41f-a2449e5c65ab:4 df415102-9ad6-44f3-9bbd-d32abc2915ae:5 27b68c8c-94ef-4af7-9fd0-9bb227a3cd09:6 8d6f40ca-2a2f-4804-b4f0-fae8ead51553:7 411ee439-01ab-4b3c-b23f-c36ea5768095:8 8e54302a-ca01-4ced-bf7f-03bd626f9874:9 ' 1:12 256 -d 00:13:14.123 22:14:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@66 -- # timing_exit setup 00:13:14.123 22:14:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:14.123 22:14:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:14.123 22:14:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@68 -- # sleep 1 00:13:15.061 22:14:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@70 -- # timing_enter discovery 00:13:15.061 22:14:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:15.061 22:14:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:15.061 22:14:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@71 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:13:15.061 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:13:15.061 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2 00:13:15.061 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:13:15.061 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target4 00:13:15.061 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target5 00:13:15.061 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target6 00:13:15.061 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target7 00:13:15.061 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target8 00:13:15.061 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target9 00:13:15.061 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target10 00:13:15.061 22:14:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@72 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:13:15.061 [2024-07-23 22:14:47.208220] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.061 [2024-07-23 22:14:47.208753] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: 
unsupported INQUIRY VPD page 0xb9 00:13:15.061 [2024-07-23 22:14:47.231741] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.321 [2024-07-23 22:14:47.256937] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.321 [2024-07-23 22:14:47.278220] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.321 [2024-07-23 22:14:47.290109] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.321 [2024-07-23 22:14:47.297838] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.321 [2024-07-23 22:14:47.333632] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.321 [2024-07-23 22:14:47.334029] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.321 [2024-07-23 22:14:47.334056] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.321 [2024-07-23 22:14:47.368413] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.321 [2024-07-23 22:14:47.377342] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.321 [2024-07-23 22:14:47.418470] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.321 [2024-07-23 22:14:47.431116] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.321 [2024-07-23 22:14:47.435216] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.321 [2024-07-23 22:14:47.453949] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.321 [2024-07-23 22:14:47.457942] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.321 [2024-07-23 22:14:47.463772] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.321 
[2024-07-23 22:14:47.476772] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.321 [2024-07-23 22:14:47.500586] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.321 [2024-07-23 22:14:47.500673] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.321 [2024-07-23 22:14:47.513384] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.321 [2024-07-23 22:14:47.515791] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.581 [2024-07-23 22:14:47.528090] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.581 [2024-07-23 22:14:47.546611] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.581 [2024-07-23 22:14:47.557986] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.581 [2024-07-23 22:14:47.595779] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.581 [2024-07-23 22:14:47.598473] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.581 [2024-07-23 22:14:47.614031] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.581 [2024-07-23 22:14:47.624366] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.581 [2024-07-23 22:14:47.624465] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.581 [2024-07-23 22:14:47.631648] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.581 [2024-07-23 22:14:47.647047] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.581 [2024-07-23 22:14:47.652206] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.581 [2024-07-23 22:14:47.686377] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.581 [2024-07-23 22:14:47.688948] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.581 [2024-07-23 22:14:47.689432] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.581 [2024-07-23 22:14:47.732480] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.581 [2024-07-23 22:14:47.735929] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.581 [2024-07-23 22:14:47.746747] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.581 [2024-07-23 22:14:47.755052] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.841 [2024-07-23 22:14:47.778181] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.841 [2024-07-23 22:14:47.790392] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.841 [2024-07-23 22:14:47.798180] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.841 [2024-07-23 22:14:47.808808] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.841 [2024-07-23 22:14:47.813020] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.841 [2024-07-23 22:14:47.813159] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.841 [2024-07-23 22:14:47.826450] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.841 [2024-07-23 22:14:47.845551] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.841 [2024-07-23 22:14:47.858966] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.841 [2024-07-23 22:14:47.868831] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:13:15.841 [2024-07-23 22:14:47.873996] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.841 [2024-07-23 22:14:47.875823] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.841 [2024-07-23 22:14:47.898561] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.841 [2024-07-23 22:14:47.902736] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.841 [2024-07-23 22:14:47.906403] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.841 [2024-07-23 22:14:47.976691] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.841 [2024-07-23 22:14:47.978263] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.841 [2024-07-23 22:14:47.983309] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.841 [2024-07-23 22:14:47.983346] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.841 [2024-07-23 22:14:47.983353] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.841 [2024-07-23 22:14:47.987448] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.841 [2024-07-23 22:14:47.988126] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:15.841 [2024-07-23 22:14:48.032638] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.101 [2024-07-23 22:14:48.044321] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.101 [2024-07-23 22:14:48.070908] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.101 [2024-07-23 22:14:48.078614] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.101 [2024-07-23 
22:14:48.083734] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.101 [2024-07-23 22:14:48.150523] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.101 [2024-07-23 22:14:48.173491] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.101 [2024-07-23 22:14:48.173996] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.101 [2024-07-23 22:14:48.181686] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.101 [2024-07-23 22:14:48.182841] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.101 [2024-07-23 22:14:48.201508] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.101 [2024-07-23 22:14:48.227873] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.101 [2024-07-23 22:14:48.229452] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.101 [2024-07-23 22:14:48.238544] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.101 [2024-07-23 22:14:48.261877] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.101 [2024-07-23 22:14:48.295745] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.360 [2024-07-23 22:14:48.304494] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.620 [2024-07-23 22:14:48.590587] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.620 [2024-07-23 22:14:48.612444] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.620 [2024-07-23 22:14:48.625749] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.620 [2024-07-23 22:14:48.636679] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.620 [2024-07-23 22:14:48.677420] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.620 [2024-07-23 22:14:48.677536] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.620 [2024-07-23 22:14:48.689507] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.620 [2024-07-23 22:14:48.692046] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.620 [2024-07-23 22:14:48.692150] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.620 [2024-07-23 22:14:48.717261] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.620 [2024-07-23 22:14:48.724868] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.620 [2024-07-23 22:14:48.787064] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.620 [2024-07-23 22:14:48.790059] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.620 [2024-07-23 22:14:48.799256] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.879 [2024-07-23 22:14:48.830092] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.879 [2024-07-23 22:14:48.830132] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.879 [2024-07-23 22:14:48.851558] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.879 [2024-07-23 22:14:48.857710] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.879 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:13:16.879 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:13:16.879 
Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:13:16.879 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:13:16.880 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:13:16.880 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:13:16.880 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:13:16.880 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:13:16.880 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:13:16.880 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:13:16.880 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:13:16.880 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:13:16.880 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:13:16.880 Login to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:13:16.880 Login to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:13:16.880 Login to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:13:16.880 Login to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:13:16.880 Login to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:13:16.880 Login to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:13:16.880 Login to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 
00:13:16.880 22:14:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@73 -- # waitforiscsidevices 100 00:13:16.880 22:14:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@116 -- # local num=100 00:13:16.880 22:14:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:13:16.880 22:14:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:13:16.880 22:14:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:13:16.880 22:14:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:13:16.880 [2024-07-23 22:14:48.902249] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.880 [2024-07-23 22:14:48.908570] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:16.880 22:14:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # n=100 00:13:16.880 22:14:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@120 -- # '[' 100 -ne 100 ']' 00:13:16.880 22:14:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@123 -- # return 0 00:13:16.880 22:14:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@74 -- # timing_exit discovery 00:13:16.880 22:14:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:16.880 22:14:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:16.880 22:14:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@76 -- # timing_enter fio 00:13:16.880 22:14:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:16.880 22:14:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:16.880 22:14:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 8 -t randwrite -r 10 -v 00:13:17.139 [global] 00:13:17.139 
thread=1 00:13:17.139 invalidate=1 00:13:17.139 rw=randwrite 00:13:17.139 time_based=1 00:13:17.139 runtime=10 00:13:17.139 ioengine=libaio 00:13:17.139 direct=1 00:13:17.139 bs=131072 00:13:17.139 iodepth=8 00:13:17.139 norandommap=0 00:13:17.139 numjobs=1 00:13:17.139 00:13:17.139 verify_dump=1 00:13:17.139 verify_backlog=512 00:13:17.139 verify_state_save=0 00:13:17.139 do_verify=1 00:13:17.139 verify=crc32c-intel 00:13:17.139 [job0] 00:13:17.139 filename=/dev/sdb 00:13:17.139 [job1] 00:13:17.139 filename=/dev/sdf 00:13:17.139 [job2] 00:13:17.139 filename=/dev/sdj 00:13:17.139 [job3] 00:13:17.139 filename=/dev/sdq 00:13:17.139 [job4] 00:13:17.139 filename=/dev/sdv 00:13:17.139 [job5] 00:13:17.139 filename=/dev/sdab 00:13:17.139 [job6] 00:13:17.139 filename=/dev/sdae 00:13:17.139 [job7] 00:13:17.139 filename=/dev/sdak 00:13:17.139 [job8] 00:13:17.139 filename=/dev/sdat 00:13:17.139 [job9] 00:13:17.139 filename=/dev/sdaz 00:13:17.139 [job10] 00:13:17.139 filename=/dev/sdd 00:13:17.139 [job11] 00:13:17.139 filename=/dev/sdk 00:13:17.139 [job12] 00:13:17.139 filename=/dev/sdp 00:13:17.139 [job13] 00:13:17.139 filename=/dev/sdw 00:13:17.139 [job14] 00:13:17.139 filename=/dev/sdad 00:13:17.139 [job15] 00:13:17.139 filename=/dev/sdaj 00:13:17.139 [job16] 00:13:17.139 filename=/dev/sdap 00:13:17.139 [job17] 00:13:17.139 filename=/dev/sdaw 00:13:17.139 [job18] 00:13:17.139 filename=/dev/sdbd 00:13:17.139 [job19] 00:13:17.139 filename=/dev/sdbj 00:13:17.139 [job20] 00:13:17.139 filename=/dev/sdi 00:13:17.139 [job21] 00:13:17.139 filename=/dev/sdn 00:13:17.139 [job22] 00:13:17.139 filename=/dev/sdt 00:13:17.139 [job23] 00:13:17.139 filename=/dev/sdz 00:13:17.139 [job24] 00:13:17.139 filename=/dev/sdah 00:13:17.139 [job25] 00:13:17.139 filename=/dev/sdan 00:13:17.139 [job26] 00:13:17.139 filename=/dev/sdau 00:13:17.140 [job27] 00:13:17.140 filename=/dev/sdbc 00:13:17.140 [job28] 00:13:17.140 filename=/dev/sdbi 00:13:17.140 [job29] 00:13:17.140 filename=/dev/sdbn 
00:13:17.140 [job30] 00:13:17.140 filename=/dev/sdg 00:13:17.140 [job31] 00:13:17.140 filename=/dev/sdm 00:13:17.140 [job32] 00:13:17.140 filename=/dev/sds 00:13:17.140 [job33] 00:13:17.140 filename=/dev/sdy 00:13:17.140 [job34] 00:13:17.140 filename=/dev/sdac 00:13:17.140 [job35] 00:13:17.140 filename=/dev/sdai 00:13:17.140 [job36] 00:13:17.140 filename=/dev/sdao 00:13:17.140 [job37] 00:13:17.140 filename=/dev/sdar 00:13:17.140 [job38] 00:13:17.140 filename=/dev/sday 00:13:17.140 [job39] 00:13:17.140 filename=/dev/sdbf 00:13:17.140 [job40] 00:13:17.140 filename=/dev/sdo 00:13:17.140 [job41] 00:13:17.140 filename=/dev/sdu 00:13:17.140 [job42] 00:13:17.140 filename=/dev/sdaa 00:13:17.140 [job43] 00:13:17.140 filename=/dev/sdag 00:13:17.140 [job44] 00:13:17.140 filename=/dev/sdam 00:13:17.140 [job45] 00:13:17.140 filename=/dev/sdaq 00:13:17.140 [job46] 00:13:17.140 filename=/dev/sdav 00:13:17.140 [job47] 00:13:17.140 filename=/dev/sdbb 00:13:17.140 [job48] 00:13:17.140 filename=/dev/sdbg 00:13:17.140 [job49] 00:13:17.140 filename=/dev/sdbm 00:13:17.140 [job50] 00:13:17.140 filename=/dev/sdax 00:13:17.140 [job51] 00:13:17.140 filename=/dev/sdbe 00:13:17.140 [job52] 00:13:17.140 filename=/dev/sdbk 00:13:17.140 [job53] 00:13:17.140 filename=/dev/sdbo 00:13:17.140 [job54] 00:13:17.140 filename=/dev/sdbq 00:13:17.140 [job55] 00:13:17.140 filename=/dev/sdbr 00:13:17.140 [job56] 00:13:17.140 filename=/dev/sdbt 00:13:17.140 [job57] 00:13:17.140 filename=/dev/sdbv 00:13:17.140 [job58] 00:13:17.140 filename=/dev/sdby 00:13:17.140 [job59] 00:13:17.140 filename=/dev/sdca 00:13:17.140 [job60] 00:13:17.140 filename=/dev/sdba 00:13:17.140 [job61] 00:13:17.140 filename=/dev/sdbh 00:13:17.140 [job62] 00:13:17.140 filename=/dev/sdbl 00:13:17.140 [job63] 00:13:17.140 filename=/dev/sdbp 00:13:17.140 [job64] 00:13:17.140 filename=/dev/sdbs 00:13:17.140 [job65] 00:13:17.140 filename=/dev/sdbu 00:13:17.140 [job66] 00:13:17.140 filename=/dev/sdbw 00:13:17.140 [job67] 00:13:17.140 
filename=/dev/sdbx 00:13:17.140 [job68] 00:13:17.140 filename=/dev/sdbz 00:13:17.140 [job69] 00:13:17.140 filename=/dev/sdcb 00:13:17.140 [job70] 00:13:17.140 filename=/dev/sdcc 00:13:17.140 [job71] 00:13:17.140 filename=/dev/sdcg 00:13:17.140 [job72] 00:13:17.140 filename=/dev/sdci 00:13:17.140 [job73] 00:13:17.140 filename=/dev/sdcl 00:13:17.140 [job74] 00:13:17.140 filename=/dev/sdcm 00:13:17.140 [job75] 00:13:17.140 filename=/dev/sdcn 00:13:17.140 [job76] 00:13:17.140 filename=/dev/sdcp 00:13:17.140 [job77] 00:13:17.140 filename=/dev/sdcr 00:13:17.140 [job78] 00:13:17.140 filename=/dev/sdct 00:13:17.140 [job79] 00:13:17.140 filename=/dev/sdcv 00:13:17.140 [job80] 00:13:17.140 filename=/dev/sdcd 00:13:17.140 [job81] 00:13:17.140 filename=/dev/sdce 00:13:17.140 [job82] 00:13:17.140 filename=/dev/sdcf 00:13:17.140 [job83] 00:13:17.140 filename=/dev/sdch 00:13:17.140 [job84] 00:13:17.140 filename=/dev/sdcj 00:13:17.140 [job85] 00:13:17.140 filename=/dev/sdck 00:13:17.140 [job86] 00:13:17.140 filename=/dev/sdco 00:13:17.140 [job87] 00:13:17.140 filename=/dev/sdcq 00:13:17.140 [job88] 00:13:17.140 filename=/dev/sdcs 00:13:17.140 [job89] 00:13:17.140 filename=/dev/sdcu 00:13:17.140 [job90] 00:13:17.140 filename=/dev/sda 00:13:17.140 [job91] 00:13:17.140 filename=/dev/sdc 00:13:17.140 [job92] 00:13:17.140 filename=/dev/sde 00:13:17.140 [job93] 00:13:17.140 filename=/dev/sdh 00:13:17.140 [job94] 00:13:17.140 filename=/dev/sdl 00:13:17.140 [job95] 00:13:17.140 filename=/dev/sdr 00:13:17.400 [job96] 00:13:17.400 filename=/dev/sdx 00:13:17.400 [job97] 00:13:17.400 filename=/dev/sdaf 00:13:17.400 [job98] 00:13:17.400 filename=/dev/sdal 00:13:17.400 [job99] 00:13:17.400 filename=/dev/sdas 00:13:18.779 queue_depth set to 113 (sdb) 00:13:18.779 queue_depth set to 113 (sdf) 00:13:18.779 queue_depth set to 113 (sdj) 00:13:18.779 queue_depth set to 113 (sdq) 00:13:18.779 queue_depth set to 113 (sdv) 00:13:18.779 queue_depth set to 113 (sdab) 00:13:18.779 queue_depth set to 113 
(sdae) 00:13:18.779 queue_depth set to 113 (sdak) 00:13:18.779 queue_depth set to 113 (sdat) 00:13:18.779 queue_depth set to 113 (sdaz) 00:13:19.038 queue_depth set to 113 (sdd) 00:13:19.038 queue_depth set to 113 (sdk) 00:13:19.038 queue_depth set to 113 (sdp) 00:13:19.038 queue_depth set to 113 (sdw) 00:13:19.038 queue_depth set to 113 (sdad) 00:13:19.038 queue_depth set to 113 (sdaj) 00:13:19.038 queue_depth set to 113 (sdap) 00:13:19.038 queue_depth set to 113 (sdaw) 00:13:19.038 queue_depth set to 113 (sdbd) 00:13:19.038 queue_depth set to 113 (sdbj) 00:13:19.038 queue_depth set to 113 (sdi) 00:13:19.038 queue_depth set to 113 (sdn) 00:13:19.038 queue_depth set to 113 (sdt) 00:13:19.297 queue_depth set to 113 (sdz) 00:13:19.297 queue_depth set to 113 (sdah) 00:13:19.297 queue_depth set to 113 (sdan) 00:13:19.297 queue_depth set to 113 (sdau) 00:13:19.297 queue_depth set to 113 (sdbc) 00:13:19.297 queue_depth set to 113 (sdbi) 00:13:19.297 queue_depth set to 113 (sdbn) 00:13:19.297 queue_depth set to 113 (sdg) 00:13:19.297 queue_depth set to 113 (sdm) 00:13:19.297 queue_depth set to 113 (sds) 00:13:19.297 queue_depth set to 113 (sdy) 00:13:19.297 queue_depth set to 113 (sdac) 00:13:19.297 queue_depth set to 113 (sdai) 00:13:19.557 queue_depth set to 113 (sdao) 00:13:19.557 queue_depth set to 113 (sdar) 00:13:19.557 queue_depth set to 113 (sday) 00:13:19.557 queue_depth set to 113 (sdbf) 00:13:19.557 queue_depth set to 113 (sdo) 00:13:19.557 queue_depth set to 113 (sdu) 00:13:19.557 queue_depth set to 113 (sdaa) 00:13:19.557 queue_depth set to 113 (sdag) 00:13:19.557 queue_depth set to 113 (sdam) 00:13:19.557 queue_depth set to 113 (sdaq) 00:13:19.557 queue_depth set to 113 (sdav) 00:13:19.557 queue_depth set to 113 (sdbb) 00:13:19.557 queue_depth set to 113 (sdbg) 00:13:19.816 queue_depth set to 113 (sdbm) 00:13:19.816 queue_depth set to 113 (sdax) 00:13:19.816 queue_depth set to 113 (sdbe) 00:13:19.816 queue_depth set to 113 (sdbk) 00:13:19.816 queue_depth set 
to 113 (sdbo) 00:13:19.816 queue_depth set to 113 (sdbq) 00:13:19.816 queue_depth set to 113 (sdbr) 00:13:19.816 queue_depth set to 113 (sdbt) 00:13:19.816 queue_depth set to 113 (sdbv) 00:13:19.816 queue_depth set to 113 (sdby) 00:13:19.816 queue_depth set to 113 (sdca) 00:13:19.816 queue_depth set to 113 (sdba) 00:13:20.075 queue_depth set to 113 (sdbh) 00:13:20.075 queue_depth set to 113 (sdbl) 00:13:20.075 queue_depth set to 113 (sdbp) 00:13:20.075 queue_depth set to 113 (sdbs) 00:13:20.075 queue_depth set to 113 (sdbu) 00:13:20.075 queue_depth set to 113 (sdbw) 00:13:20.075 queue_depth set to 113 (sdbx) 00:13:20.075 queue_depth set to 113 (sdbz) 00:13:20.075 queue_depth set to 113 (sdcb) 00:13:20.075 queue_depth set to 113 (sdcc) 00:13:20.075 queue_depth set to 113 (sdcg) 00:13:20.075 queue_depth set to 113 (sdci) 00:13:20.075 queue_depth set to 113 (sdcl) 00:13:20.333 queue_depth set to 113 (sdcm) 00:13:20.333 queue_depth set to 113 (sdcn) 00:13:20.333 queue_depth set to 113 (sdcp) 00:13:20.333 queue_depth set to 113 (sdcr) 00:13:20.333 queue_depth set to 113 (sdct) 00:13:20.333 queue_depth set to 113 (sdcv) 00:13:20.333 queue_depth set to 113 (sdcd) 00:13:20.333 queue_depth set to 113 (sdce) 00:13:20.333 queue_depth set to 113 (sdcf) 00:13:20.333 queue_depth set to 113 (sdch) 00:13:20.333 queue_depth set to 113 (sdcj) 00:13:20.333 queue_depth set to 113 (sdck) 00:13:20.333 queue_depth set to 113 (sdco) 00:13:20.592 queue_depth set to 113 (sdcq) 00:13:20.592 queue_depth set to 113 (sdcs) 00:13:20.592 queue_depth set to 113 (sdcu) 00:13:20.592 queue_depth set to 113 (sda) 00:13:20.592 queue_depth set to 113 (sdc) 00:13:20.592 queue_depth set to 113 (sde) 00:13:20.592 queue_depth set to 113 (sdh) 00:13:20.592 queue_depth set to 113 (sdl) 00:13:20.592 queue_depth set to 113 (sdr) 00:13:20.592 queue_depth set to 113 (sdx) 00:13:20.592 queue_depth set to 113 (sdaf) 00:13:20.592 queue_depth set to 113 (sdal) 00:13:20.592 queue_depth set to 113 (sdas) 00:13:20.851 
job0: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.851 job1: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.851 job2: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.851 job3: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.851 job4: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.851 job5: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.851 job6: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.851 job7: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.851 job8: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.851 job9: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.851 job10: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.851 job11: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.851 job12: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.851 job13: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.851 job14: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.851 job15: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, 
ioengine=libaio, iodepth=8 00:13:20.851 job16: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.851 job17: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.851 job18: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.851 job19: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job20: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job21: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job22: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job23: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job24: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job25: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job26: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job27: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job28: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job29: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job30: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job31: (g=0): rw=randwrite, bs=(R) 
128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job32: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job33: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job34: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job35: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job36: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job37: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job38: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job39: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job40: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job41: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job42: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job43: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job44: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job45: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job46: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 
00:13:20.852 job47: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job48: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job49: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job50: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job51: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job52: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job53: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job54: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job55: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job56: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job57: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job58: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job59: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job60: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job61: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job62: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 
128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job63: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job64: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job65: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job66: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job67: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job68: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job69: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:20.852 job70: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 job71: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 job72: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 job73: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 job74: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 job75: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 job76: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 job77: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 
job78: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 job79: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 job80: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 job81: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 job82: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 job83: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 job84: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 job85: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 job86: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 job87: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 job88: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 job89: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 job90: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 job91: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 job92: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 job93: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 
128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 job94: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 job95: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 job96: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 job97: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 job98: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 job99: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:13:21.112 fio-3.35 00:13:21.112 Starting 100 threads 00:13:21.112 [2024-07-23 22:14:53.152732] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.112 [2024-07-23 22:14:53.156146] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.112 [2024-07-23 22:14:53.159621] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.112 [2024-07-23 22:14:53.161586] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.112 [2024-07-23 22:14:53.163450] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.112 [2024-07-23 22:14:53.165373] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.112 [2024-07-23 22:14:53.167129] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.112 [2024-07-23 22:14:53.168941] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.112 [2024-07-23 22:14:53.170777] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.112 [2024-07-23 
22:14:53.172660] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.174771] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.176679] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.178484] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.180458] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.182516] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.184428] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.186277] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.188076] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.189932] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.191731] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.193824] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.196552] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.199061] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.201241] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.204368] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.206655] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.209155] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.215488] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.217577] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.220194] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.222637] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.224533] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.226329] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.228221] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.230027] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.231943] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.233880] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.235706] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.237683] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.239561] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.241533] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.243329] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.245254] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.247025] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.249922] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.251753] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.253523] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.255349] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.257259] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.259033] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.261936] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.264206] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.266375] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.268534] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.270443] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.272373] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.274262] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.276178] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 
22:14:53.278039] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.282195] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.285488] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.288469] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.291204] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.294168] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.296656] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.299576] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.301782] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.113 [2024-07-23 22:14:53.305287] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.385 [2024-07-23 22:14:53.307816] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.385 [2024-07-23 22:14:53.310697] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.385 [2024-07-23 22:14:53.314953] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.385 [2024-07-23 22:14:53.319329] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.385 [2024-07-23 22:14:53.321588] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.385 [2024-07-23 22:14:53.323412] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.385 [2024-07-23 22:14:53.325221] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.385 [2024-07-23 22:14:53.327136] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.385 [2024-07-23 22:14:53.328904] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.385 [2024-07-23 22:14:53.330610] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.385 [2024-07-23 22:14:53.332493] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.385 [2024-07-23 22:14:53.334172] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.385 [2024-07-23 22:14:53.335811] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.385 [2024-07-23 22:14:53.337851] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.385 [2024-07-23 22:14:53.339687] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.385 [2024-07-23 22:14:53.341566] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.385 [2024-07-23 22:14:53.343460] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.385 [2024-07-23 22:14:53.345406] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.385 [2024-07-23 22:14:53.347441] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.385 [2024-07-23 22:14:53.349469] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.385 [2024-07-23 22:14:53.351397] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.385 [2024-07-23 22:14:53.353222] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.385 [2024-07-23 22:14:53.355127] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9
00:13:21.385 [2024-07-23 22:14:53.357138] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:13:35.158
00:13:35.158 job0: (groupid=0, jobs=1): err= 0: pid=84649: Tue Jul 23 22:15:07 2024
00:13:35.158 read: IOPS=86, BW=10.8MiB/s (11.3MB/s)(100MiB/9257msec)
00:13:35.158 slat (usec): min=5, max=3204, avg=61.69, stdev=180.58
00:13:35.158 clat (msec): min=2, max=199, avg=15.90, stdev=18.50
00:13:35.158 lat (msec): min=2, max=199, avg=15.96, stdev=18.50
00:13:35.158 clat percentiles (msec):
00:13:35.158 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9],
00:13:35.158 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 13],
00:13:35.158 | 70.00th=[ 15], 80.00th=[ 18], 90.00th=[ 25], 95.00th=[ 35],
00:13:35.158 | 99.00th=[ 84], 99.50th=[ 153], 99.90th=[ 199], 99.95th=[ 199],
00:13:35.158 | 99.99th=[ 199]
00:13:35.158 write: IOPS=112, BW=14.0MiB/s (14.7MB/s)(119MiB/8460msec); 0 zone resets
00:13:35.158 slat (usec): min=34, max=6021, avg=150.03, stdev=335.93
00:13:35.158 clat (msec): min=4, max=225, avg=70.28, stdev=30.48
00:13:35.158 lat (msec): min=4, max=225, avg=70.43, stdev=30.48
00:13:35.158 clat percentiles (msec):
00:13:35.158 | 1.00th=[ 9], 5.00th=[ 21], 10.00th=[ 49], 20.00th=[ 53],
00:13:35.158 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 71],
00:13:35.158 | 70.00th=[ 78], 80.00th=[ 86], 90.00th=[ 101], 95.00th=[ 122],
00:13:35.158 | 99.00th=[ 201], 99.50th=[ 222], 99.90th=[ 226], 99.95th=[ 226],
00:13:35.158 | 99.99th=[ 226]
00:13:35.158 bw ( KiB/s): min= 2304, max=29184, per=0.84%, avg=12055.15, stdev=6147.94, samples=20
00:13:35.158 iops : min= 18, max= 228, avg=94.10, stdev=48.03, samples=20
00:13:35.158 lat (msec) : 4=0.06%, 10=15.89%, 20=25.49%, 50=11.09%, 100=41.54%
00:13:35.158 lat (msec) : 250=5.94%
00:13:35.158 cpu : usr=0.63%, sys=0.31%, ctx=2879, majf=0, minf=3
00:13:35.158 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:35.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.158 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.158 issued rwts: total=800,950,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:35.158 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:35.158 job1: (groupid=0, jobs=1): err= 0: pid=84650: Tue Jul 23 22:15:07 2024
00:13:35.158 read: IOPS=90, BW=11.3MiB/s (11.9MB/s)(100MiB/8838msec)
00:13:35.158 slat (usec): min=5, max=1581, avg=53.27, stdev=121.87
00:13:35.158 clat (usec): min=3325, max=86101, avg=12247.48, stdev=10285.77
00:13:35.158 lat (usec): min=3344, max=86108, avg=12300.75, stdev=10280.48
00:13:35.158 clat percentiles (usec):
00:13:35.158 | 1.00th=[ 3949], 5.00th=[ 5080], 10.00th=[ 5669], 20.00th=[ 6521],
00:13:35.158 | 30.00th=[ 7373], 40.00th=[ 8160], 50.00th=[ 9765], 60.00th=[10945],
00:13:35.158 | 70.00th=[11994], 80.00th=[14484], 90.00th=[20579], 95.00th=[27657],
00:13:35.158 | 99.00th=[66847], 99.50th=[71828], 99.90th=[86508], 99.95th=[86508],
00:13:35.158 | 99.99th=[86508]
00:13:35.158 write: IOPS=97, BW=12.2MiB/s (12.7MB/s)(107MiB/8793msec); 0 zone resets
00:13:35.158 slat (usec): min=36, max=2979, avg=132.83, stdev=185.79
00:13:35.158 clat (msec): min=21, max=343, avg=81.61, stdev=41.62
00:13:35.158 lat (msec): min=21, max=343, avg=81.74, stdev=41.61
00:13:35.158 clat percentiles (msec):
00:13:35.158 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 53],
00:13:35.158 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 79],
00:13:35.158 | 70.00th=[ 87], 80.00th=[ 100], 90.00th=[ 122], 95.00th=[ 171],
00:13:35.158 | 99.00th=[ 255], 99.50th=[ 284], 99.90th=[ 342], 99.95th=[ 342],
00:13:35.158 | 99.99th=[ 342]
00:13:35.158 bw ( KiB/s): min= 2816, max=19968, per=0.75%, avg=10816.16, stdev=5305.38, samples=19
00:13:35.158 iops : min= 22, max= 156, avg=84.37, stdev=41.47, samples=19
00:13:35.158 lat (msec) : 4=0.48%, 10=24.47%, 20=18.25%, 50=11.90%, 100=34.74%
00:13:35.158 lat (msec) : 250=9.55%, 500=0.60%
00:13:35.158 cpu : usr=0.64%, sys=0.24%, ctx=2757, majf=0, minf=7
00:13:35.158 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:35.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.158 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.158 issued rwts: total=800,855,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:35.158 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:35.158 job2: (groupid=0, jobs=1): err= 0: pid=84673: Tue Jul 23 22:15:07 2024
00:13:35.158 read: IOPS=91, BW=11.5MiB/s (12.0MB/s)(100MiB/8725msec)
00:13:35.158 slat (usec): min=5, max=2799, avg=50.49, stdev=138.98
00:13:35.158 clat (usec): min=3523, max=77275, avg=13606.56, stdev=9877.20
00:13:35.158 lat (usec): min=3674, max=77286, avg=13657.05, stdev=9873.82
00:13:35.158 clat percentiles (usec):
00:13:35.158 | 1.00th=[ 4015], 5.00th=[ 5080], 10.00th=[ 5800], 20.00th=[ 7504],
00:13:35.158 | 30.00th=[ 8848], 40.00th=[10028], 50.00th=[11207], 60.00th=[12387],
00:13:35.158 | 70.00th=[14091], 80.00th=[16909], 90.00th=[22414], 95.00th=[31589],
00:13:35.158 | 99.00th=[68682], 99.50th=[71828], 99.90th=[77071], 99.95th=[77071],
00:13:35.159 | 99.99th=[77071]
00:13:35.159 write: IOPS=101, BW=12.7MiB/s (13.3MB/s)(110MiB/8644msec); 0 zone resets
00:13:35.159 slat (usec): min=36, max=5421, avg=134.66, stdev=272.24
00:13:35.159 clat (msec): min=25, max=297, avg=78.10, stdev=34.20
00:13:35.159 lat (msec): min=26, max=298, avg=78.24, stdev=34.20
00:13:35.159 clat percentiles (msec):
00:13:35.159 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 54],
00:13:35.159 | 30.00th=[ 59], 40.00th=[ 64], 50.00th=[ 69], 60.00th=[ 74],
00:13:35.159 | 70.00th=[ 82], 80.00th=[ 95], 90.00th=[ 117], 95.00th=[ 148],
00:13:35.159 | 99.00th=[ 213], 99.50th=[ 257], 99.90th=[ 300], 99.95th=[ 300],
00:13:35.159 | 99.99th=[ 300]
00:13:35.159 bw ( KiB/s): min= 2299, max=18981, per=0.77%, avg=11029.63, stdev=5056.72, samples=19
00:13:35.159 iops : min= 17, max= 148, avg=85.89, stdev=39.63, samples=19
00:13:35.159 lat (msec) : 4=0.36%, 10=18.49%, 20=22.60%, 50=11.63%, 100=38.46%
00:13:35.159 lat (msec) : 250=8.17%, 500=0.30%
00:13:35.159 cpu : usr=0.53%, sys=0.34%, ctx=2787, majf=0, minf=3
00:13:35.159 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:35.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.159 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.159 issued rwts: total=800,877,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:35.159 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:35.159 job3: (groupid=0, jobs=1): err= 0: pid=84725: Tue Jul 23 22:15:07 2024
00:13:35.159 read: IOPS=94, BW=11.8MiB/s (12.3MB/s)(106MiB/8993msec)
00:13:35.159 slat (usec): min=5, max=743, avg=49.14, stdev=90.14
00:13:35.159 clat (usec): min=3886, max=36379, avg=11127.16, stdev=5100.64
00:13:35.159 lat (usec): min=3924, max=36391, avg=11176.29, stdev=5094.95
00:13:35.159 clat percentiles (usec):
00:13:35.159 | 1.00th=[ 4883], 5.00th=[ 5735], 10.00th=[ 6259], 20.00th=[ 7177],
00:13:35.159 | 30.00th=[ 8291], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[10814],
00:13:35.159 | 70.00th=[11731], 80.00th=[13435], 90.00th=[17957], 95.00th=[21103],
00:13:35.159 | 99.00th=[31327], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439],
00:13:35.159 | 99.99th=[36439]
00:13:35.159 write: IOPS=108, BW=13.6MiB/s (14.3MB/s)(120MiB/8809msec); 0 zone resets
00:13:35.159 slat (usec): min=35, max=4508, avg=145.86, stdev=272.89
00:13:35.159 clat (msec): min=7, max=222, avg=72.83, stdev=28.09
00:13:35.159 lat (msec): min=7, max=222, avg=72.97, stdev=28.09
00:13:35.159 clat percentiles (msec):
00:13:35.159 | 1.00th=[ 19], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 54],
00:13:35.159 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 72],
00:13:35.159 | 70.00th=[ 79], 80.00th=[ 92], 90.00th=[ 110], 95.00th=[ 127],
00:13:35.159 | 99.00th=[ 180], 99.50th=[ 192], 99.90th=[ 224], 99.95th=[ 224],
00:13:35.159 | 99.99th=[ 224]
00:13:35.159 bw ( KiB/s): min= 4096, max=22784, per=0.85%, avg=12242.63, stdev=4969.61, samples=19
00:13:35.159 iops : min= 32, max= 178, avg=95.53, stdev=38.75, samples=19
00:13:35.159 lat (msec) : 4=0.11%, 10=22.58%, 20=22.08%, 50=8.63%, 100=39.07%
00:13:35.159 lat (msec) : 250=7.53%
00:13:35.159 cpu : usr=0.64%, sys=0.36%, ctx=3018, majf=0, minf=3
00:13:35.159 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:35.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.159 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.159 issued rwts: total=847,960,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:35.159 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:35.159 job4: (groupid=0, jobs=1): err= 0: pid=84815: Tue Jul 23 22:15:07 2024
00:13:35.159 read: IOPS=92, BW=11.6MiB/s (12.2MB/s)(100MiB/8611msec)
00:13:35.159 slat (usec): min=6, max=919, avg=45.53, stdev=92.93
00:13:35.159 clat (msec): min=3, max=270, avg=17.44, stdev=31.01
00:13:35.159 lat (msec): min=3, max=270, avg=17.49, stdev=31.01
00:13:35.159 clat percentiles (msec):
00:13:35.159 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8],
00:13:35.159 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 13],
00:13:35.159 | 70.00th=[ 14], 80.00th=[ 17], 90.00th=[ 23], 95.00th=[ 41],
00:13:35.159 | 99.00th=[ 230], 99.50th=[ 245], 99.90th=[ 271], 99.95th=[ 271],
00:13:35.159 | 99.99th=[ 271]
00:13:35.159 write: IOPS=98, BW=12.3MiB/s (12.9MB/s)(102MiB/8303msec); 0 zone resets
00:13:35.159 slat (usec): min=29, max=30827, avg=174.91, stdev=1134.37
00:13:35.159 clat (msec): min=20, max=322, avg=80.28, stdev=30.06
00:13:35.159 lat (msec): min=28, max=322, avg=80.46, stdev=29.99
00:13:35.159 clat percentiles (msec):
00:13:35.159 | 1.00th=[ 43], 5.00th=[ 49], 10.00th=[ 52], 20.00th=[ 57],
00:13:35.159 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 74], 60.00th=[ 83],
00:13:35.159 | 70.00th=[ 92], 80.00th=[ 101], 90.00th=[ 116], 95.00th=[ 126],
00:13:35.159 | 99.00th=[ 178], 99.50th=[ 190], 99.90th=[ 321], 99.95th=[ 321],
00:13:35.159 | 99.99th=[ 321]
00:13:35.159 bw ( KiB/s): min= 3840, max=17152, per=0.76%, avg=10896.37, stdev=4396.26, samples=19
00:13:35.159 iops : min= 30, max= 134, avg=84.95, stdev=34.43, samples=19
00:13:35.159 lat (msec) : 4=0.25%, 10=22.08%, 20=20.16%, 50=9.28%, 100=36.73%
00:13:35.159 lat (msec) : 250=11.07%, 500=0.43%
00:13:35.159 cpu : usr=0.55%, sys=0.30%, ctx=2715, majf=0, minf=5
00:13:35.159 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:35.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.159 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.159 issued rwts: total=800,817,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:35.159 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:35.159 job5: (groupid=0, jobs=1): err= 0: pid=84832: Tue Jul 23 22:15:07 2024
00:13:35.159 read: IOPS=100, BW=12.6MiB/s (13.2MB/s)(100MiB/7935msec)
00:13:35.159 slat (usec): min=5, max=1538, avg=48.37, stdev=94.97
00:13:35.159 clat (msec): min=3, max=107, avg=11.39, stdev=11.64
00:13:35.159 lat (msec): min=3, max=107, avg=11.44, stdev=11.65
00:13:35.159 clat percentiles (msec):
00:13:35.159 | 1.00th=[ 5], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 6],
00:13:35.159 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 10],
00:13:35.159 | 70.00th=[ 12], 80.00th=[ 14], 90.00th=[ 19], 95.00th=[ 25],
00:13:35.159 | 99.00th=[ 46], 99.50th=[ 107], 99.90th=[ 108], 99.95th=[ 108],
00:13:35.159 | 99.99th=[ 108]
00:13:35.159 write: IOPS=91, BW=11.4MiB/s (12.0MB/s)(101MiB/8859msec); 0 zone resets
00:13:35.159 slat (usec): min=28, max=2086, avg=129.60, stdev=200.23
00:13:35.159 clat (msec): min=11, max=415, avg=87.16, stdev=40.26
00:13:35.159 lat (msec): min=11, max=415, avg=87.29, stdev=40.25
00:13:35.159 clat percentiles (msec):
00:13:35.159 | 1.00th=[ 46], 5.00th=[ 51], 10.00th=[ 56], 20.00th=[ 61],
00:13:35.159 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 79], 60.00th=[ 87],
00:13:35.159 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 124], 95.00th=[ 142],
00:13:35.159 | 99.00th=[ 262], 99.50th=[ 338], 99.90th=[ 418], 99.95th=[ 418],
00:13:35.159 | 99.99th=[ 418]
00:13:35.159 bw ( KiB/s): min= 2565, max=16507, per=0.71%, avg=10277.89, stdev=4564.08, samples=19
00:13:35.159 iops : min= 20, max= 128, avg=79.74, stdev=35.66, samples=19
00:13:35.159 lat (msec) : 4=0.12%, 10=30.97%, 20=15.73%, 50=4.98%, 100=34.39%
00:13:35.159 lat (msec) : 250=13.25%, 500=0.56%
00:13:35.159 cpu : usr=0.54%, sys=0.29%, ctx=2670, majf=0, minf=7
00:13:35.159 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:35.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.159 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.159 issued rwts: total=800,808,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:35.159 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:35.159 job6: (groupid=0, jobs=1): err= 0: pid=84961: Tue Jul 23 22:15:07 2024
00:13:35.159 read: IOPS=94, BW=11.8MiB/s (12.4MB/s)(100MiB/8439msec)
00:13:35.159 slat (usec): min=5, max=2723, avg=48.45, stdev=136.21
00:13:35.159 clat (msec): min=2, max=219, avg=13.57, stdev=24.89
00:13:35.159 lat (msec): min=2, max=219, avg=13.62, stdev=24.89
00:13:35.159 clat percentiles (msec):
00:13:35.159 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 6],
00:13:35.159 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 10],
00:13:35.159 | 70.00th=[ 11], 80.00th=[ 13], 90.00th=[ 16], 95.00th=[ 36],
00:13:35.159 | 99.00th=[ 112], 99.50th=[ 218], 99.90th=[ 220], 99.95th=[ 220],
00:13:35.159 | 99.99th=[ 220]
00:13:35.159 write: IOPS=94, BW=11.8MiB/s (12.4MB/s)(102MiB/8648msec); 0 zone resets
00:13:35.159 slat (usec): min=34, max=8758, avg=159.78, stdev=486.67
00:13:35.159 clat (msec): min=22, max=284, avg=84.21, stdev=32.15
00:13:35.159 lat (msec): min=22, max=284, avg=84.37, stdev=32.14
00:13:35.159 clat percentiles (msec):
00:13:35.159 | 1.00th=[ 45], 5.00th=[ 49], 10.00th=[ 54], 20.00th=[ 59],
00:13:35.159 | 30.00th=[ 65], 40.00th=[ 70], 50.00th=[ 78], 60.00th=[ 87],
00:13:35.159 | 70.00th=[ 96], 80.00th=[ 104], 90.00th=[ 118], 95.00th=[ 133],
00:13:35.159 | 99.00th=[ 213], 99.50th=[ 228], 99.90th=[ 284], 99.95th=[ 284],
00:13:35.159 | 99.99th=[ 284]
00:13:35.159 bw ( KiB/s): min= 2048, max=16640, per=0.73%, avg=10569.74, stdev=4157.32, samples=19
00:13:35.159 iops : min= 16, max= 130, avg=82.42, stdev=32.42, samples=19
00:13:35.159 lat (msec) : 4=1.18%, 10=31.81%, 20=13.00%, 50=4.76%, 100=36.08%
00:13:35.159 lat (msec) : 250=13.00%, 500=0.19%
00:13:35.159 cpu : usr=0.61%, sys=0.22%, ctx=2570, majf=0, minf=5
00:13:35.159 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:35.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.159 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.159 issued rwts: total=800,816,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:35.159 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:35.159 job7: (groupid=0, jobs=1): err= 0: pid=84996: Tue Jul 23 22:15:07 2024
00:13:35.159 read: IOPS=90, BW=11.4MiB/s (11.9MB/s)(100MiB/8792msec)
00:13:35.159 slat (usec): min=5, max=2842, avg=48.80, stdev=129.01
00:13:35.159 clat (usec): min=4283, max=89792, avg=13590.11, stdev=9379.49
00:13:35.159 lat (usec): min=4533, max=89848, avg=13638.92, stdev=9375.86
00:13:35.159 clat percentiles (usec):
00:13:35.159 | 1.00th=[ 5014], 5.00th=[ 6456], 10.00th=[ 7635], 20.00th=[ 8979],
00:13:35.159 | 30.00th=[ 9634], 40.00th=[10552], 50.00th=[11469], 60.00th=[12387],
00:13:35.159 | 70.00th=[13566], 80.00th=[15664], 90.00th=[20841], 95.00th=[25822],
00:13:35.159 | 99.00th=[49546], 99.50th=[85459], 99.90th=[89654], 99.95th=[89654],
00:13:35.160 | 99.99th=[89654]
00:13:35.160 write: IOPS=104, BW=13.1MiB/s (13.8MB/s)(114MiB/8678msec); 0 zone resets
00:13:35.160 slat (usec): min=30, max=73387, avg=214.74, stdev=2436.67
00:13:35.160 clat (msec): min=13, max=301, avg=75.30, stdev=34.53
00:13:35.160 lat (msec): min=13, max=301, avg=75.51, stdev=34.52
00:13:35.160 clat percentiles (msec):
00:13:35.160 | 1.00th=[ 42], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 53],
00:13:35.160 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 71],
00:13:35.160 | 70.00th=[ 79], 80.00th=[ 91], 90.00th=[ 109], 95.00th=[ 140],
00:13:35.160 | 99.00th=[ 230], 99.50th=[ 245], 99.90th=[ 300], 99.95th=[ 300],
00:13:35.160 | 99.99th=[ 300]
00:13:35.160 bw ( KiB/s): min= 3832, max=19200, per=0.80%, avg=11563.05, stdev=4765.84, samples=20
00:13:35.160 iops : min= 29, max= 150, avg=90.10, stdev=37.34, samples=20
00:13:35.160 lat (msec) : 10=15.37%, 20=26.36%, 50=12.86%, 100=38.16%, 250=7.07%
00:13:35.160 lat (msec) : 500=0.18%
00:13:35.160 cpu : usr=0.72%, sys=0.19%, ctx=2861, majf=0, minf=3
00:13:35.160 IO
depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.160 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.160 issued rwts: total=800,911,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.160 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.160 job8: (groupid=0, jobs=1): err= 0: pid=85148: Tue Jul 23 22:15:07 2024 00:13:35.160 read: IOPS=92, BW=11.5MiB/s (12.1MB/s)(103MiB/8955msec) 00:13:35.160 slat (usec): min=5, max=754, avg=41.68, stdev=78.43 00:13:35.160 clat (usec): min=4681, max=27037, avg=10828.57, stdev=4051.14 00:13:35.160 lat (usec): min=4756, max=27108, avg=10870.25, stdev=4053.26 00:13:35.160 clat percentiles (usec): 00:13:35.160 | 1.00th=[ 5080], 5.00th=[ 5604], 10.00th=[ 6128], 20.00th=[ 7308], 00:13:35.160 | 30.00th=[ 8717], 40.00th=[ 9503], 50.00th=[10159], 60.00th=[10814], 00:13:35.160 | 70.00th=[11731], 80.00th=[13304], 90.00th=[16909], 95.00th=[19006], 00:13:35.160 | 99.00th=[22938], 99.50th=[23462], 99.90th=[27132], 99.95th=[27132], 00:13:35.160 | 99.99th=[27132] 00:13:35.160 write: IOPS=108, BW=13.5MiB/s (14.2MB/s)(120MiB/8871msec); 0 zone resets 00:13:35.160 slat (usec): min=36, max=12611, avg=145.97, stdev=499.53 00:13:35.160 clat (msec): min=4, max=295, avg=72.89, stdev=31.33 00:13:35.160 lat (msec): min=4, max=295, avg=73.03, stdev=31.29 00:13:35.160 clat percentiles (msec): 00:13:35.160 | 1.00th=[ 14], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 54], 00:13:35.160 | 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 65], 60.00th=[ 70], 00:13:35.160 | 70.00th=[ 77], 80.00th=[ 90], 90.00th=[ 106], 95.00th=[ 132], 00:13:35.160 | 99.00th=[ 199], 99.50th=[ 255], 99.90th=[ 296], 99.95th=[ 296], 00:13:35.160 | 99.99th=[ 296] 00:13:35.160 bw ( KiB/s): min= 2560, max=22784, per=0.85%, avg=12280.05, stdev=5502.03, samples=19 00:13:35.160 iops : min= 20, max= 178, avg=95.68, stdev=42.90, samples=19 00:13:35.160 lat 
(msec) : 10=22.41%, 20=23.19%, 50=7.56%, 100=40.22%, 250=6.33% 00:13:35.160 lat (msec) : 500=0.28% 00:13:35.160 cpu : usr=0.61%, sys=0.34%, ctx=2847, majf=0, minf=5 00:13:35.160 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.160 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.160 issued rwts: total=825,960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.160 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.160 job9: (groupid=0, jobs=1): err= 0: pid=85204: Tue Jul 23 22:15:07 2024 00:13:35.160 read: IOPS=90, BW=11.4MiB/s (11.9MB/s)(100MiB/8796msec) 00:13:35.160 slat (usec): min=5, max=4625, avg=59.42, stdev=214.33 00:13:35.160 clat (msec): min=4, max=151, avg=13.96, stdev=13.46 00:13:35.160 lat (msec): min=4, max=151, avg=14.02, stdev=13.45 00:13:35.160 clat percentiles (msec): 00:13:35.160 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:13:35.160 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 12], 00:13:35.160 | 70.00th=[ 14], 80.00th=[ 17], 90.00th=[ 21], 95.00th=[ 24], 00:13:35.160 | 99.00th=[ 69], 99.50th=[ 126], 99.90th=[ 153], 99.95th=[ 153], 00:13:35.160 | 99.99th=[ 153] 00:13:35.160 write: IOPS=108, BW=13.6MiB/s (14.2MB/s)(117MiB/8641msec); 0 zone resets 00:13:35.160 slat (usec): min=35, max=66273, avg=206.16, stdev=2174.77 00:13:35.160 clat (usec): min=1136, max=224642, avg=72815.91, stdev=29372.30 00:13:35.160 lat (usec): min=1297, max=225461, avg=73022.07, stdev=29358.51 00:13:35.160 clat percentiles (msec): 00:13:35.160 | 1.00th=[ 13], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 53], 00:13:35.160 | 30.00th=[ 57], 40.00th=[ 62], 50.00th=[ 65], 60.00th=[ 71], 00:13:35.160 | 70.00th=[ 79], 80.00th=[ 89], 90.00th=[ 105], 95.00th=[ 133], 00:13:35.160 | 99.00th=[ 194], 99.50th=[ 203], 99.90th=[ 226], 99.95th=[ 226], 00:13:35.160 | 99.99th=[ 226] 00:13:35.160 bw ( KiB/s): min= 3840, 
max=19712, per=0.83%, avg=11927.05, stdev=5012.34, samples=20 00:13:35.160 iops : min= 30, max= 154, avg=93.10, stdev=39.16, samples=20 00:13:35.160 lat (msec) : 2=0.12%, 4=0.12%, 10=15.18%, 20=26.28%, 50=11.33% 00:13:35.160 lat (msec) : 100=40.60%, 250=6.38% 00:13:35.160 cpu : usr=0.73%, sys=0.21%, ctx=2831, majf=0, minf=3 00:13:35.160 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.160 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.160 issued rwts: total=800,939,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.160 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.160 job10: (groupid=0, jobs=1): err= 0: pid=85228: Tue Jul 23 22:15:07 2024 00:13:35.160 read: IOPS=126, BW=15.8MiB/s (16.6MB/s)(140MiB/8835msec) 00:13:35.160 slat (usec): min=5, max=1486, avg=45.07, stdev=98.69 00:13:35.160 clat (usec): min=2032, max=82409, avg=9155.21, stdev=9661.33 00:13:35.160 lat (usec): min=2053, max=82464, avg=9200.28, stdev=9658.66 00:13:35.160 clat percentiles (usec): 00:13:35.160 | 1.00th=[ 3064], 5.00th=[ 3490], 10.00th=[ 3818], 20.00th=[ 4293], 00:13:35.160 | 30.00th=[ 4817], 40.00th=[ 5473], 50.00th=[ 6587], 60.00th=[ 7373], 00:13:35.160 | 70.00th=[ 8586], 80.00th=[10290], 90.00th=[15401], 95.00th=[22414], 00:13:35.160 | 99.00th=[63177], 99.50th=[66323], 99.90th=[79168], 99.95th=[82314], 00:13:35.160 | 99.99th=[82314] 00:13:35.160 write: IOPS=136, BW=17.1MiB/s (17.9MB/s)(149MiB/8724msec); 0 zone resets 00:13:35.160 slat (usec): min=30, max=34592, avg=177.58, stdev=1202.35 00:13:35.160 clat (msec): min=23, max=202, avg=57.87, stdev=20.16 00:13:35.160 lat (msec): min=27, max=202, avg=58.05, stdev=20.13 00:13:35.160 clat percentiles (msec): 00:13:35.160 | 1.00th=[ 33], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 42], 00:13:35.160 | 30.00th=[ 46], 40.00th=[ 50], 50.00th=[ 54], 60.00th=[ 59], 00:13:35.160 | 70.00th=[ 65], 
80.00th=[ 71], 90.00th=[ 84], 95.00th=[ 96], 00:13:35.160 | 99.00th=[ 128], 99.50th=[ 144], 99.90th=[ 184], 99.95th=[ 203], 00:13:35.160 | 99.99th=[ 203] 00:13:35.160 bw ( KiB/s): min= 5632, max=23808, per=1.05%, avg=15179.55, stdev=5267.68, samples=20 00:13:35.160 iops : min= 44, max= 186, avg=118.50, stdev=41.17, samples=20 00:13:35.160 lat (msec) : 4=6.79%, 10=31.26%, 20=7.44%, 50=23.61%, 100=28.88% 00:13:35.160 lat (msec) : 250=2.03% 00:13:35.160 cpu : usr=0.86%, sys=0.33%, ctx=3817, majf=0, minf=1 00:13:35.160 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.160 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.160 issued rwts: total=1120,1193,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.160 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.160 job11: (groupid=0, jobs=1): err= 0: pid=85291: Tue Jul 23 22:15:07 2024 00:13:35.160 read: IOPS=131, BW=16.4MiB/s (17.2MB/s)(140MiB/8546msec) 00:13:35.160 slat (usec): min=5, max=2490, avg=49.50, stdev=137.43 00:13:35.160 clat (msec): min=2, max=257, avg=10.69, stdev=21.53 00:13:35.160 lat (msec): min=2, max=257, avg=10.74, stdev=21.53 00:13:35.160 clat percentiles (msec): 00:13:35.160 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 6], 00:13:35.160 | 30.00th=[ 6], 40.00th=[ 7], 50.00th=[ 8], 60.00th=[ 9], 00:13:35.160 | 70.00th=[ 10], 80.00th=[ 12], 90.00th=[ 15], 95.00th=[ 20], 00:13:35.160 | 99.00th=[ 51], 99.50th=[ 255], 99.90th=[ 257], 99.95th=[ 257], 00:13:35.160 | 99.99th=[ 257] 00:13:35.160 write: IOPS=140, BW=17.5MiB/s (18.4MB/s)(149MiB/8485msec); 0 zone resets 00:13:35.160 slat (usec): min=33, max=8735, avg=131.89, stdev=326.85 00:13:35.160 clat (usec): min=679, max=156835, avg=56640.91, stdev=21860.56 00:13:35.160 lat (usec): min=753, max=156884, avg=56772.80, stdev=21849.05 00:13:35.160 clat percentiles (msec): 00:13:35.160 | 1.00th=[ 4], 
5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 40], 00:13:35.160 | 30.00th=[ 44], 40.00th=[ 50], 50.00th=[ 54], 60.00th=[ 59], 00:13:35.160 | 70.00th=[ 65], 80.00th=[ 71], 90.00th=[ 85], 95.00th=[ 97], 00:13:35.160 | 99.00th=[ 131], 99.50th=[ 138], 99.90th=[ 155], 99.95th=[ 157], 00:13:35.160 | 99.99th=[ 157] 00:13:35.160 bw ( KiB/s): min= 1536, max=23296, per=1.04%, avg=14987.53, stdev=5902.12, samples=19 00:13:35.160 iops : min= 12, max= 182, avg=116.95, stdev=46.11, samples=19 00:13:35.160 lat (usec) : 750=0.04%, 1000=0.04% 00:13:35.160 lat (msec) : 2=0.17%, 4=2.64%, 10=33.65%, 20=10.91%, 50=21.91% 00:13:35.160 lat (msec) : 100=28.19%, 250=2.17%, 500=0.26% 00:13:35.160 cpu : usr=0.79%, sys=0.40%, ctx=3750, majf=0, minf=3 00:13:35.160 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.160 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.160 issued rwts: total=1120,1189,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.160 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.160 job12: (groupid=0, jobs=1): err= 0: pid=85297: Tue Jul 23 22:15:07 2024 00:13:35.160 read: IOPS=139, BW=17.4MiB/s (18.3MB/s)(160MiB/9172msec) 00:13:35.160 slat (usec): min=5, max=3144, avg=47.48, stdev=129.69 00:13:35.160 clat (msec): min=2, max=113, avg= 9.99, stdev=10.26 00:13:35.160 lat (msec): min=2, max=113, avg=10.04, stdev=10.27 00:13:35.160 clat percentiles (msec): 00:13:35.160 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 6], 00:13:35.160 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 8], 60.00th=[ 9], 00:13:35.161 | 70.00th=[ 10], 80.00th=[ 12], 90.00th=[ 15], 95.00th=[ 20], 00:13:35.161 | 99.00th=[ 59], 99.50th=[ 110], 99.90th=[ 112], 99.95th=[ 113], 00:13:35.161 | 99.99th=[ 113] 00:13:35.161 write: IOPS=155, BW=19.5MiB/s (20.4MB/s)(164MiB/8389msec); 0 zone resets 00:13:35.161 slat (usec): min=36, max=4748, avg=146.35, stdev=316.53 
00:13:35.161 clat (msec): min=16, max=178, avg=50.78, stdev=19.88 00:13:35.161 lat (msec): min=16, max=178, avg=50.93, stdev=19.86 00:13:35.161 clat percentiles (msec): 00:13:35.161 | 1.00th=[ 26], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 37], 00:13:35.161 | 30.00th=[ 40], 40.00th=[ 43], 50.00th=[ 47], 60.00th=[ 50], 00:13:35.161 | 70.00th=[ 54], 80.00th=[ 61], 90.00th=[ 71], 95.00th=[ 87], 00:13:35.161 | 99.00th=[ 132], 99.50th=[ 165], 99.90th=[ 180], 99.95th=[ 180], 00:13:35.161 | 99.99th=[ 180] 00:13:35.161 bw ( KiB/s): min= 5888, max=27392, per=1.19%, avg=17077.16, stdev=6407.86, samples=19 00:13:35.161 iops : min= 46, max= 214, avg=133.32, stdev=50.06, samples=19 00:13:35.161 lat (msec) : 4=0.81%, 10=34.47%, 20=11.94%, 50=32.07%, 100=18.86% 00:13:35.161 lat (msec) : 250=1.85% 00:13:35.161 cpu : usr=0.87%, sys=0.47%, ctx=4315, majf=0, minf=1 00:13:35.161 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.161 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.161 issued rwts: total=1280,1308,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.161 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.161 job13: (groupid=0, jobs=1): err= 0: pid=85298: Tue Jul 23 22:15:07 2024 00:13:35.161 read: IOPS=127, BW=16.0MiB/s (16.8MB/s)(143MiB/8951msec) 00:13:35.161 slat (usec): min=5, max=1912, avg=50.35, stdev=122.69 00:13:35.161 clat (usec): min=2773, max=58712, avg=9414.55, stdev=6020.32 00:13:35.161 lat (usec): min=3514, max=58721, avg=9464.90, stdev=6015.58 00:13:35.161 clat percentiles (usec): 00:13:35.161 | 1.00th=[ 3949], 5.00th=[ 4555], 10.00th=[ 4883], 20.00th=[ 5538], 00:13:35.161 | 30.00th=[ 6456], 40.00th=[ 6980], 50.00th=[ 8029], 60.00th=[ 8848], 00:13:35.161 | 70.00th=[10028], 80.00th=[11469], 90.00th=[15008], 95.00th=[18744], 00:13:35.161 | 99.00th=[31589], 99.50th=[53216], 99.90th=[58459], 99.95th=[58459], 
00:13:35.161 | 99.99th=[58459] 00:13:35.161 write: IOPS=148, BW=18.5MiB/s (19.4MB/s)(160MiB/8639msec); 0 zone resets 00:13:35.161 slat (usec): min=31, max=59220, avg=169.89, stdev=1665.53 00:13:35.161 clat (msec): min=29, max=196, avg=53.07, stdev=21.06 00:13:35.161 lat (msec): min=30, max=196, avg=53.24, stdev=21.07 00:13:35.161 clat percentiles (msec): 00:13:35.161 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 37], 00:13:35.161 | 30.00th=[ 41], 40.00th=[ 45], 50.00th=[ 48], 60.00th=[ 52], 00:13:35.161 | 70.00th=[ 58], 80.00th=[ 65], 90.00th=[ 75], 95.00th=[ 89], 00:13:35.161 | 99.00th=[ 144], 99.50th=[ 150], 99.90th=[ 188], 99.95th=[ 197], 00:13:35.161 | 99.99th=[ 197] 00:13:35.161 bw ( KiB/s): min= 6656, max=26624, per=1.13%, avg=16305.74, stdev=6088.83, samples=19 00:13:35.161 iops : min= 52, max= 208, avg=127.16, stdev=47.57, samples=19 00:13:35.161 lat (msec) : 4=0.62%, 10=32.54%, 20=12.12%, 50=31.63%, 100=20.91% 00:13:35.161 lat (msec) : 250=2.19% 00:13:35.161 cpu : usr=0.87%, sys=0.38%, ctx=4020, majf=0, minf=5 00:13:35.161 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.161 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.161 issued rwts: total=1145,1280,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.161 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.161 job14: (groupid=0, jobs=1): err= 0: pid=85299: Tue Jul 23 22:15:07 2024 00:13:35.161 read: IOPS=123, BW=15.4MiB/s (16.2MB/s)(140MiB/9086msec) 00:13:35.161 slat (usec): min=5, max=1370, avg=47.73, stdev=111.76 00:13:35.161 clat (usec): min=2390, max=88074, avg=10114.00, stdev=9175.68 00:13:35.161 lat (usec): min=2565, max=88092, avg=10161.73, stdev=9176.23 00:13:35.161 clat percentiles (usec): 00:13:35.161 | 1.00th=[ 3163], 5.00th=[ 3851], 10.00th=[ 4555], 20.00th=[ 5538], 00:13:35.161 | 30.00th=[ 6390], 40.00th=[ 7242], 50.00th=[ 
8291], 60.00th=[ 8979], 00:13:35.161 | 70.00th=[ 9765], 80.00th=[11600], 90.00th=[15008], 95.00th=[19006], 00:13:35.161 | 99.00th=[48497], 99.50th=[82314], 99.90th=[87557], 99.95th=[87557], 00:13:35.161 | 99.99th=[87557] 00:13:35.161 write: IOPS=146, BW=18.3MiB/s (19.1MB/s)(156MiB/8561msec); 0 zone resets 00:13:35.161 slat (usec): min=35, max=6328, avg=127.70, stdev=280.70 00:13:35.161 clat (msec): min=9, max=218, avg=54.26, stdev=23.42 00:13:35.161 lat (msec): min=9, max=218, avg=54.38, stdev=23.42 00:13:35.161 clat percentiles (msec): 00:13:35.161 | 1.00th=[ 21], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 38], 00:13:35.161 | 30.00th=[ 41], 40.00th=[ 45], 50.00th=[ 48], 60.00th=[ 54], 00:13:35.161 | 70.00th=[ 59], 80.00th=[ 66], 90.00th=[ 79], 95.00th=[ 96], 00:13:35.161 | 99.00th=[ 148], 99.50th=[ 171], 99.90th=[ 207], 99.95th=[ 220], 00:13:35.161 | 99.99th=[ 220] 00:13:35.161 bw ( KiB/s): min= 5888, max=26624, per=1.08%, avg=15585.89, stdev=6047.85, samples=19 00:13:35.161 iops : min= 46, max= 208, avg=121.63, stdev=47.31, samples=19 00:13:35.161 lat (msec) : 4=2.78%, 10=30.80%, 20=11.90%, 50=29.41%, 100=22.83% 00:13:35.161 lat (msec) : 250=2.28% 00:13:35.161 cpu : usr=0.78%, sys=0.44%, ctx=3923, majf=0, minf=3 00:13:35.161 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.161 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.161 issued rwts: total=1120,1250,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.161 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.161 job15: (groupid=0, jobs=1): err= 0: pid=85300: Tue Jul 23 22:15:07 2024 00:13:35.161 read: IOPS=135, BW=16.9MiB/s (17.8MB/s)(160MiB/9446msec) 00:13:35.161 slat (usec): min=5, max=1076, avg=43.99, stdev=85.46 00:13:35.161 clat (usec): min=2187, max=97951, avg=9352.55, stdev=9410.22 00:13:35.161 lat (usec): min=2201, max=97965, avg=9396.54, 
stdev=9411.52 00:13:35.161 clat percentiles (usec): 00:13:35.161 | 1.00th=[ 2900], 5.00th=[ 3490], 10.00th=[ 3949], 20.00th=[ 4817], 00:13:35.161 | 30.00th=[ 5342], 40.00th=[ 5800], 50.00th=[ 6652], 60.00th=[ 7767], 00:13:35.161 | 70.00th=[ 8979], 80.00th=[10683], 90.00th=[16319], 95.00th=[21103], 00:13:35.161 | 99.00th=[62653], 99.50th=[68682], 99.90th=[79168], 99.95th=[98042], 00:13:35.161 | 99.99th=[98042] 00:13:35.161 write: IOPS=150, BW=18.9MiB/s (19.8MB/s)(161MiB/8539msec); 0 zone resets 00:13:35.161 slat (usec): min=28, max=3095, avg=133.11, stdev=236.71 00:13:35.161 clat (usec): min=765, max=170220, avg=52583.95, stdev=19637.09 00:13:35.161 lat (usec): min=894, max=170292, avg=52717.06, stdev=19647.12 00:13:35.161 clat percentiles (msec): 00:13:35.161 | 1.00th=[ 4], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 37], 00:13:35.161 | 30.00th=[ 41], 40.00th=[ 46], 50.00th=[ 50], 60.00th=[ 55], 00:13:35.161 | 70.00th=[ 61], 80.00th=[ 67], 90.00th=[ 80], 95.00th=[ 91], 00:13:35.161 | 99.00th=[ 103], 99.50th=[ 117], 99.90th=[ 169], 99.95th=[ 171], 00:13:35.161 | 99.99th=[ 171] 00:13:35.161 bw ( KiB/s): min= 8942, max=34608, per=1.14%, avg=16365.50, stdev=6114.09, samples=20 00:13:35.161 iops : min= 69, max= 270, avg=127.65, stdev=47.74, samples=20 00:13:35.161 lat (usec) : 1000=0.04% 00:13:35.161 lat (msec) : 2=0.31%, 4=5.33%, 10=34.07%, 20=8.57%, 50=26.17% 00:13:35.161 lat (msec) : 100=24.81%, 250=0.70% 00:13:35.161 cpu : usr=0.89%, sys=0.43%, ctx=4087, majf=0, minf=1 00:13:35.161 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.161 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.161 issued rwts: total=1280,1288,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.161 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.161 job16: (groupid=0, jobs=1): err= 0: pid=85305: Tue Jul 23 22:15:07 2024 00:13:35.161 read: 
IOPS=136, BW=17.0MiB/s (17.8MB/s)(160MiB/9406msec) 00:13:35.161 slat (usec): min=5, max=5520, avg=45.68, stdev=184.75 00:13:35.161 clat (usec): min=2264, max=51690, avg=7527.17, stdev=6035.12 00:13:35.161 lat (usec): min=2352, max=51704, avg=7572.85, stdev=6032.35 00:13:35.161 clat percentiles (usec): 00:13:35.161 | 1.00th=[ 2769], 5.00th=[ 3195], 10.00th=[ 3556], 20.00th=[ 4080], 00:13:35.161 | 30.00th=[ 4752], 40.00th=[ 5211], 50.00th=[ 5735], 60.00th=[ 6325], 00:13:35.161 | 70.00th=[ 7504], 80.00th=[ 9241], 90.00th=[13698], 95.00th=[17695], 00:13:35.161 | 99.00th=[41157], 99.50th=[46924], 99.90th=[51643], 99.95th=[51643], 00:13:35.161 | 99.99th=[51643] 00:13:35.161 write: IOPS=151, BW=19.0MiB/s (19.9MB/s)(167MiB/8801msec); 0 zone resets 00:13:35.161 slat (usec): min=32, max=5997, avg=142.15, stdev=314.51 00:13:35.161 clat (msec): min=4, max=194, avg=52.24, stdev=22.32 00:13:35.161 lat (msec): min=4, max=194, avg=52.38, stdev=22.33 00:13:35.161 clat percentiles (msec): 00:13:35.161 | 1.00th=[ 10], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 37], 00:13:35.161 | 30.00th=[ 40], 40.00th=[ 44], 50.00th=[ 47], 60.00th=[ 52], 00:13:35.161 | 70.00th=[ 59], 80.00th=[ 67], 90.00th=[ 78], 95.00th=[ 88], 00:13:35.161 | 99.00th=[ 138], 99.50th=[ 171], 99.90th=[ 188], 99.95th=[ 194], 00:13:35.161 | 99.99th=[ 194] 00:13:35.161 bw ( KiB/s): min= 7424, max=35328, per=1.18%, avg=17008.85, stdev=6929.44, samples=20 00:13:35.161 iops : min= 58, max= 276, avg=132.80, stdev=54.17, samples=20 00:13:35.161 lat (msec) : 4=8.98%, 10=33.30%, 20=6.54%, 50=28.98%, 100=20.72% 00:13:35.161 lat (msec) : 250=1.49% 00:13:35.161 cpu : usr=0.93%, sys=0.41%, ctx=4183, majf=0, minf=5 00:13:35.161 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.161 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.161 issued rwts: total=1280,1336,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:13:35.161 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.161 job17: (groupid=0, jobs=1): err= 0: pid=85313: Tue Jul 23 22:15:07 2024 00:13:35.161 read: IOPS=134, BW=16.8MiB/s (17.6MB/s)(155MiB/9252msec) 00:13:35.161 slat (usec): min=5, max=1349, avg=48.47, stdev=108.42 00:13:35.161 clat (msec): min=2, max=161, avg= 9.83, stdev=12.76 00:13:35.161 lat (msec): min=2, max=161, avg= 9.87, stdev=12.76 00:13:35.161 clat percentiles (msec): 00:13:35.162 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 5], 00:13:35.162 | 30.00th=[ 6], 40.00th=[ 7], 50.00th=[ 8], 60.00th=[ 9], 00:13:35.162 | 70.00th=[ 10], 80.00th=[ 12], 90.00th=[ 16], 95.00th=[ 19], 00:13:35.162 | 99.00th=[ 38], 99.50th=[ 148], 99.90th=[ 161], 99.95th=[ 161], 00:13:35.162 | 99.99th=[ 161] 00:13:35.162 write: IOPS=151, BW=18.9MiB/s (19.8MB/s)(160MiB/8457msec); 0 zone resets 00:13:35.162 slat (usec): min=31, max=2774, avg=126.71, stdev=178.38 00:13:35.162 clat (msec): min=11, max=174, avg=51.88, stdev=19.86 00:13:35.162 lat (msec): min=11, max=174, avg=52.00, stdev=19.87 00:13:35.162 clat percentiles (msec): 00:13:35.162 | 1.00th=[ 19], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 37], 00:13:35.162 | 30.00th=[ 40], 40.00th=[ 44], 50.00th=[ 48], 60.00th=[ 53], 00:13:35.162 | 70.00th=[ 58], 80.00th=[ 65], 90.00th=[ 74], 95.00th=[ 88], 00:13:35.162 | 99.00th=[ 128], 99.50th=[ 142], 99.90th=[ 169], 99.95th=[ 176], 00:13:35.162 | 99.99th=[ 176] 00:13:35.162 bw ( KiB/s): min= 5888, max=28985, per=1.15%, avg=16597.11, stdev=6267.21, samples=19 00:13:35.162 iops : min= 46, max= 226, avg=129.53, stdev=48.91, samples=19 00:13:35.162 lat (msec) : 4=2.02%, 10=33.57%, 20=12.10%, 50=29.48%, 100=20.99% 00:13:35.162 lat (msec) : 250=1.83% 00:13:35.162 cpu : usr=0.95%, sys=0.39%, ctx=4105, majf=0, minf=1 00:13:35.162 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.162 
complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.162 issued rwts: total=1240,1280,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.162 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.162 job18: (groupid=0, jobs=1): err= 0: pid=85314: Tue Jul 23 22:15:07 2024 00:13:35.162 read: IOPS=123, BW=15.4MiB/s (16.2MB/s)(140MiB/9064msec) 00:13:35.162 slat (usec): min=5, max=702, avg=37.97, stdev=63.00 00:13:35.162 clat (msec): min=2, max=168, avg=10.52, stdev=13.05 00:13:35.162 lat (msec): min=2, max=168, avg=10.56, stdev=13.05 00:13:35.162 clat percentiles (msec): 00:13:35.162 | 1.00th=[ 4], 5.00th=[ 4], 10.00th=[ 5], 20.00th=[ 6], 00:13:35.162 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 8], 60.00th=[ 9], 00:13:35.162 | 70.00th=[ 11], 80.00th=[ 13], 90.00th=[ 18], 95.00th=[ 22], 00:13:35.162 | 99.00th=[ 66], 99.50th=[ 120], 99.90th=[ 165], 99.95th=[ 169], 00:13:35.162 | 99.99th=[ 169] 00:13:35.162 write: IOPS=142, BW=17.9MiB/s (18.7MB/s)(152MiB/8514msec); 0 zone resets 00:13:35.162 slat (usec): min=37, max=5506, avg=139.18, stdev=289.30 00:13:35.162 clat (msec): min=12, max=230, avg=55.45, stdev=23.67 00:13:35.162 lat (msec): min=12, max=230, avg=55.59, stdev=23.66 00:13:35.162 clat percentiles (msec): 00:13:35.162 | 1.00th=[ 27], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 39], 00:13:35.162 | 30.00th=[ 42], 40.00th=[ 46], 50.00th=[ 51], 60.00th=[ 56], 00:13:35.162 | 70.00th=[ 61], 80.00th=[ 67], 90.00th=[ 82], 95.00th=[ 94], 00:13:35.162 | 99.00th=[ 157], 99.50th=[ 180], 99.90th=[ 222], 99.95th=[ 232], 00:13:35.162 | 99.99th=[ 232] 00:13:35.162 bw ( KiB/s): min= 5632, max=25600, per=1.06%, avg=15234.95, stdev=5610.96, samples=19 00:13:35.162 iops : min= 44, max= 200, avg=118.89, stdev=43.77, samples=19 00:13:35.162 lat (msec) : 4=2.44%, 10=30.52%, 20=11.82%, 50=28.08%, 100=24.87% 00:13:35.162 lat (msec) : 250=2.27% 00:13:35.162 cpu : usr=0.97%, sys=0.30%, ctx=3828, majf=0, minf=5 00:13:35.162 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.162 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.162 issued rwts: total=1120,1216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.162 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.162 job19: (groupid=0, jobs=1): err= 0: pid=85315: Tue Jul 23 22:15:07 2024 00:13:35.162 read: IOPS=130, BW=16.3MiB/s (17.1MB/s)(150MiB/9177msec) 00:13:35.162 slat (usec): min=5, max=1156, avg=42.69, stdev=91.09 00:13:35.162 clat (usec): min=2678, max=61083, avg=9010.47, stdev=6433.71 00:13:35.162 lat (usec): min=2721, max=61093, avg=9053.16, stdev=6433.64 00:13:35.162 clat percentiles (usec): 00:13:35.162 | 1.00th=[ 3064], 5.00th=[ 3490], 10.00th=[ 4359], 20.00th=[ 5211], 00:13:35.162 | 30.00th=[ 5800], 40.00th=[ 6456], 50.00th=[ 7439], 60.00th=[ 8291], 00:13:35.162 | 70.00th=[ 9372], 80.00th=[10945], 90.00th=[15401], 95.00th=[20317], 00:13:35.162 | 99.00th=[43779], 99.50th=[50070], 99.90th=[55313], 99.95th=[61080], 00:13:35.162 | 99.99th=[61080] 00:13:35.162 write: IOPS=148, BW=18.5MiB/s (19.4MB/s)(160MiB/8632msec); 0 zone resets 00:13:35.162 slat (usec): min=33, max=4658, avg=144.07, stdev=277.31 00:13:35.162 clat (msec): min=11, max=210, avg=53.36, stdev=22.48 00:13:35.162 lat (msec): min=11, max=210, avg=53.51, stdev=22.48 00:13:35.162 clat percentiles (msec): 00:13:35.162 | 1.00th=[ 13], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 37], 00:13:35.162 | 30.00th=[ 40], 40.00th=[ 45], 50.00th=[ 49], 60.00th=[ 53], 00:13:35.162 | 70.00th=[ 58], 80.00th=[ 66], 90.00th=[ 78], 95.00th=[ 93], 00:13:35.162 | 99.00th=[ 150], 99.50th=[ 167], 99.90th=[ 201], 99.95th=[ 211], 00:13:35.162 | 99.99th=[ 211] 00:13:35.162 bw ( KiB/s): min= 5888, max=28672, per=1.13%, avg=16319.74, stdev=6046.03, samples=19 00:13:35.162 iops : min= 46, max= 224, avg=127.42, stdev=47.21, samples=19 00:13:35.162 lat (msec) : 4=3.79%, 10=33.05%, 20=9.64%, 
50=29.06%, 100=22.32% 00:13:35.162 lat (msec) : 250=2.14% 00:13:35.162 cpu : usr=0.84%, sys=0.45%, ctx=4031, majf=0, minf=5 00:13:35.162 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.162 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.162 issued rwts: total=1198,1280,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.162 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.162 job20: (groupid=0, jobs=1): err= 0: pid=85316: Tue Jul 23 22:15:07 2024 00:13:35.162 read: IOPS=129, BW=16.1MiB/s (16.9MB/s)(140MiB/8671msec) 00:13:35.162 slat (usec): min=5, max=2240, avg=33.82, stdev=85.42 00:13:35.162 clat (msec): min=2, max=187, avg=11.76, stdev=23.53 00:13:35.162 lat (msec): min=2, max=187, avg=11.79, stdev=23.54 00:13:35.162 clat percentiles (msec): 00:13:35.162 | 1.00th=[ 4], 5.00th=[ 4], 10.00th=[ 5], 20.00th=[ 5], 00:13:35.162 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 8], 00:13:35.162 | 70.00th=[ 9], 80.00th=[ 11], 90.00th=[ 16], 95.00th=[ 28], 00:13:35.162 | 99.00th=[ 182], 99.50th=[ 186], 99.90th=[ 188], 99.95th=[ 188], 00:13:35.162 | 99.99th=[ 188] 00:13:35.162 write: IOPS=138, BW=17.3MiB/s (18.1MB/s)(144MiB/8350msec); 0 zone resets 00:13:35.162 slat (usec): min=32, max=3937, avg=125.45, stdev=232.02 00:13:35.162 clat (msec): min=17, max=170, avg=57.41, stdev=21.45 00:13:35.162 lat (msec): min=17, max=170, avg=57.54, stdev=21.46 00:13:35.162 clat percentiles (msec): 00:13:35.162 | 1.00th=[ 33], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 41], 00:13:35.162 | 30.00th=[ 45], 40.00th=[ 50], 50.00th=[ 52], 60.00th=[ 57], 00:13:35.162 | 70.00th=[ 64], 80.00th=[ 70], 90.00th=[ 83], 95.00th=[ 101], 00:13:35.162 | 99.00th=[ 142], 99.50th=[ 159], 99.90th=[ 171], 99.95th=[ 171], 00:13:35.162 | 99.99th=[ 171] 00:13:35.162 bw ( KiB/s): min= 7936, max=24015, per=1.09%, avg=15748.94, stdev=4988.27, samples=18 
00:13:35.162 iops : min= 62, max= 187, avg=122.83, stdev=38.98, samples=18 00:13:35.162 lat (msec) : 4=3.04%, 10=34.76%, 20=8.18%, 50=24.33%, 100=26.22% 00:13:35.162 lat (msec) : 250=3.48% 00:13:35.162 cpu : usr=0.76%, sys=0.39%, ctx=3584, majf=0, minf=4 00:13:35.162 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.162 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.162 issued rwts: total=1120,1153,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.162 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.162 job21: (groupid=0, jobs=1): err= 0: pid=85317: Tue Jul 23 22:15:07 2024 00:13:35.162 read: IOPS=123, BW=15.4MiB/s (16.2MB/s)(146MiB/9446msec) 00:13:35.162 slat (usec): min=5, max=4373, avg=60.58, stdev=192.27 00:13:35.162 clat (usec): min=1864, max=74288, avg=10141.72, stdev=8176.51 00:13:35.162 lat (usec): min=1880, max=74294, avg=10202.30, stdev=8181.49 00:13:35.162 clat percentiles (usec): 00:13:35.162 | 1.00th=[ 3490], 5.00th=[ 4555], 10.00th=[ 5014], 20.00th=[ 5735], 00:13:35.162 | 30.00th=[ 6587], 40.00th=[ 7570], 50.00th=[ 8586], 60.00th=[ 9372], 00:13:35.162 | 70.00th=[10421], 80.00th=[12125], 90.00th=[14353], 95.00th=[19792], 00:13:35.162 | 99.00th=[60031], 99.50th=[71828], 99.90th=[73925], 99.95th=[73925], 00:13:35.162 | 99.99th=[73925] 00:13:35.162 write: IOPS=150, BW=18.8MiB/s (19.7MB/s)(160MiB/8502msec); 0 zone resets 00:13:35.162 slat (usec): min=28, max=3542, avg=130.05, stdev=253.26 00:13:35.162 clat (usec): min=1862, max=204973, avg=52629.40, stdev=23091.22 00:13:35.162 lat (msec): min=2, max=205, avg=52.76, stdev=23.10 00:13:35.162 clat percentiles (msec): 00:13:35.162 | 1.00th=[ 4], 5.00th=[ 31], 10.00th=[ 34], 20.00th=[ 38], 00:13:35.162 | 30.00th=[ 42], 40.00th=[ 45], 50.00th=[ 49], 60.00th=[ 53], 00:13:35.162 | 70.00th=[ 58], 80.00th=[ 66], 90.00th=[ 78], 95.00th=[ 95], 00:13:35.162 | 
99.00th=[ 132], 99.50th=[ 161], 99.90th=[ 194], 99.95th=[ 205], 00:13:35.162 | 99.99th=[ 205] 00:13:35.162 bw ( KiB/s): min= 5888, max=37194, per=1.14%, avg=16489.21, stdev=7877.23, samples=19 00:13:35.162 iops : min= 46, max= 290, avg=128.74, stdev=61.46, samples=19 00:13:35.162 lat (msec) : 2=0.08%, 4=1.76%, 10=32.04%, 20=13.98%, 50=27.38% 00:13:35.162 lat (msec) : 100=22.72%, 250=2.04% 00:13:35.162 cpu : usr=0.81%, sys=0.45%, ctx=3972, majf=0, minf=1 00:13:35.162 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.162 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.162 issued rwts: total=1167,1280,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.162 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.162 job22: (groupid=0, jobs=1): err= 0: pid=85318: Tue Jul 23 22:15:07 2024 00:13:35.163 read: IOPS=121, BW=15.1MiB/s (15.9MB/s)(140MiB/9245msec) 00:13:35.163 slat (usec): min=5, max=1286, avg=56.53, stdev=113.82 00:13:35.163 clat (usec): min=3321, max=68488, avg=9679.05, stdev=7751.98 00:13:35.163 lat (usec): min=3357, max=68499, avg=9735.58, stdev=7747.62 00:13:35.163 clat percentiles (usec): 00:13:35.163 | 1.00th=[ 3687], 5.00th=[ 4359], 10.00th=[ 4817], 20.00th=[ 5735], 00:13:35.163 | 30.00th=[ 6194], 40.00th=[ 6783], 50.00th=[ 7373], 60.00th=[ 8848], 00:13:35.163 | 70.00th=[10028], 80.00th=[11338], 90.00th=[14353], 95.00th=[20841], 00:13:35.163 | 99.00th=[45351], 99.50th=[66323], 99.90th=[68682], 99.95th=[68682], 00:13:35.163 | 99.99th=[68682] 00:13:35.163 write: IOPS=142, BW=17.8MiB/s (18.6MB/s)(154MiB/8683msec); 0 zone resets 00:13:35.163 slat (usec): min=36, max=58568, avg=194.10, stdev=1697.93 00:13:35.163 clat (msec): min=28, max=190, avg=55.64, stdev=20.73 00:13:35.163 lat (msec): min=28, max=190, avg=55.84, stdev=20.73 00:13:35.163 clat percentiles (msec): 00:13:35.163 | 1.00th=[ 32], 5.00th=[ 34], 
10.00th=[ 36], 20.00th=[ 40], 00:13:35.163 | 30.00th=[ 44], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 56], 00:13:35.163 | 70.00th=[ 62], 80.00th=[ 69], 90.00th=[ 79], 95.00th=[ 90], 00:13:35.163 | 99.00th=[ 142], 99.50th=[ 155], 99.90th=[ 174], 99.95th=[ 190], 00:13:35.163 | 99.99th=[ 190] 00:13:35.163 bw ( KiB/s): min= 6400, max=25138, per=1.09%, avg=15676.45, stdev=5347.86, samples=20 00:13:35.163 iops : min= 50, max= 196, avg=122.25, stdev=41.91, samples=20 00:13:35.163 lat (msec) : 4=0.89%, 10=32.64%, 20=11.60%, 50=27.28%, 100=25.80% 00:13:35.163 lat (msec) : 250=1.78% 00:13:35.163 cpu : usr=0.84%, sys=0.41%, ctx=3868, majf=0, minf=3 00:13:35.163 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.163 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.163 issued rwts: total=1120,1233,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.163 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.163 job23: (groupid=0, jobs=1): err= 0: pid=85319: Tue Jul 23 22:15:07 2024 00:13:35.163 read: IOPS=123, BW=15.4MiB/s (16.2MB/s)(140MiB/9086msec) 00:13:35.163 slat (usec): min=4, max=1334, avg=46.58, stdev=88.07 00:13:35.163 clat (msec): min=2, max=239, avg=10.58, stdev=19.95 00:13:35.163 lat (msec): min=2, max=239, avg=10.62, stdev=19.95 00:13:35.163 clat percentiles (msec): 00:13:35.163 | 1.00th=[ 4], 5.00th=[ 4], 10.00th=[ 5], 20.00th=[ 5], 00:13:35.163 | 30.00th=[ 6], 40.00th=[ 7], 50.00th=[ 8], 60.00th=[ 8], 00:13:35.163 | 70.00th=[ 10], 80.00th=[ 11], 90.00th=[ 15], 95.00th=[ 20], 00:13:35.163 | 99.00th=[ 83], 99.50th=[ 224], 99.90th=[ 226], 99.95th=[ 241], 00:13:35.163 | 99.99th=[ 241] 00:13:35.163 write: IOPS=140, BW=17.6MiB/s (18.5MB/s)(150MiB/8519msec); 0 zone resets 00:13:35.163 slat (usec): min=34, max=2894, avg=130.71, stdev=204.70 00:13:35.163 clat (msec): min=14, max=195, avg=56.26, stdev=21.93 00:13:35.163 lat 
(msec): min=14, max=196, avg=56.40, stdev=21.92 00:13:35.163 clat percentiles (msec): 00:13:35.163 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 39], 00:13:35.163 | 30.00th=[ 43], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 57], 00:13:35.163 | 70.00th=[ 63], 80.00th=[ 71], 90.00th=[ 82], 95.00th=[ 99], 00:13:35.163 | 99.00th=[ 136], 99.50th=[ 146], 99.90th=[ 188], 99.95th=[ 197], 00:13:35.163 | 99.99th=[ 197] 00:13:35.163 bw ( KiB/s): min= 8175, max=27648, per=1.05%, avg=15125.74, stdev=5644.90, samples=19 00:13:35.163 iops : min= 63, max= 216, avg=118.00, stdev=44.21, samples=19 00:13:35.163 lat (msec) : 4=4.53%, 10=31.98%, 20=9.74%, 50=24.57%, 100=26.51% 00:13:35.163 lat (msec) : 250=2.67% 00:13:35.163 cpu : usr=0.84%, sys=0.37%, ctx=3856, majf=0, minf=5 00:13:35.163 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.163 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.163 issued rwts: total=1120,1200,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.163 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.163 job24: (groupid=0, jobs=1): err= 0: pid=85320: Tue Jul 23 22:15:07 2024 00:13:35.163 read: IOPS=126, BW=15.8MiB/s (16.6MB/s)(144MiB/9123msec) 00:13:35.163 slat (usec): min=5, max=789, avg=41.00, stdev=81.91 00:13:35.163 clat (usec): min=2572, max=74636, avg=8410.91, stdev=6795.04 00:13:35.163 lat (usec): min=2584, max=74774, avg=8451.91, stdev=6802.04 00:13:35.163 clat percentiles (usec): 00:13:35.163 | 1.00th=[ 3097], 5.00th=[ 3720], 10.00th=[ 4228], 20.00th=[ 4883], 00:13:35.163 | 30.00th=[ 5342], 40.00th=[ 5866], 50.00th=[ 6718], 60.00th=[ 7439], 00:13:35.163 | 70.00th=[ 8717], 80.00th=[10028], 90.00th=[14091], 95.00th=[17957], 00:13:35.163 | 99.00th=[35390], 99.50th=[57934], 99.90th=[74974], 99.95th=[74974], 00:13:35.163 | 99.99th=[74974] 00:13:35.163 write: IOPS=145, BW=18.2MiB/s 
(19.1MB/s)(160MiB/8771msec); 0 zone resets 00:13:35.163 slat (usec): min=28, max=8754, avg=130.39, stdev=327.31 00:13:35.163 clat (msec): min=4, max=175, avg=54.29, stdev=22.15 00:13:35.163 lat (msec): min=4, max=175, avg=54.42, stdev=22.14 00:13:35.163 clat percentiles (msec): 00:13:35.163 | 1.00th=[ 18], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 39], 00:13:35.163 | 30.00th=[ 42], 40.00th=[ 46], 50.00th=[ 50], 60.00th=[ 54], 00:13:35.163 | 70.00th=[ 58], 80.00th=[ 66], 90.00th=[ 78], 95.00th=[ 97], 00:13:35.163 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 176], 00:13:35.163 | 99.99th=[ 176] 00:13:35.163 bw ( KiB/s): min= 7168, max=25856, per=1.14%, avg=16352.00, stdev=5509.84, samples=19 00:13:35.163 iops : min= 56, max= 202, avg=127.63, stdev=43.12, samples=19 00:13:35.163 lat (msec) : 4=3.74%, 10=34.25%, 20=8.05%, 50=27.89%, 100=23.78% 00:13:35.163 lat (msec) : 250=2.30% 00:13:35.163 cpu : usr=0.86%, sys=0.36%, ctx=3944, majf=0, minf=7 00:13:35.163 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.163 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.163 issued rwts: total=1155,1280,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.163 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.163 job25: (groupid=0, jobs=1): err= 0: pid=85321: Tue Jul 23 22:15:07 2024 00:13:35.163 read: IOPS=137, BW=17.2MiB/s (18.0MB/s)(160MiB/9296msec) 00:13:35.163 slat (usec): min=5, max=2361, avg=49.61, stdev=112.35 00:13:35.163 clat (usec): min=1509, max=80625, avg=8756.87, stdev=8201.79 00:13:35.163 lat (usec): min=2398, max=80636, avg=8806.48, stdev=8198.19 00:13:35.163 clat percentiles (usec): 00:13:35.163 | 1.00th=[ 3589], 5.00th=[ 3982], 10.00th=[ 4359], 20.00th=[ 5014], 00:13:35.163 | 30.00th=[ 5669], 40.00th=[ 6456], 50.00th=[ 7111], 60.00th=[ 7832], 00:13:35.163 | 70.00th=[ 8717], 80.00th=[ 9896], 
90.00th=[11994], 95.00th=[15270], 00:13:35.163 | 99.00th=[64226], 99.50th=[67634], 99.90th=[80217], 99.95th=[80217], 00:13:35.163 | 99.99th=[80217] 00:13:35.163 write: IOPS=149, BW=18.7MiB/s (19.6MB/s)(161MiB/8638msec); 0 zone resets 00:13:35.163 slat (usec): min=36, max=57122, avg=169.14, stdev=1609.09 00:13:35.163 clat (usec): min=1335, max=170251, avg=53004.01, stdev=19778.89 00:13:35.163 lat (msec): min=2, max=170, avg=53.17, stdev=19.78 00:13:35.163 clat percentiles (msec): 00:13:35.163 | 1.00th=[ 5], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 39], 00:13:35.163 | 30.00th=[ 42], 40.00th=[ 46], 50.00th=[ 50], 60.00th=[ 54], 00:13:35.163 | 70.00th=[ 59], 80.00th=[ 66], 90.00th=[ 77], 95.00th=[ 88], 00:13:35.163 | 99.00th=[ 126], 99.50th=[ 128], 99.90th=[ 165], 99.95th=[ 171], 00:13:35.163 | 99.99th=[ 171] 00:13:35.163 bw ( KiB/s): min= 5632, max=27392, per=1.14%, avg=16417.20, stdev=5623.00, samples=20 00:13:35.163 iops : min= 44, max= 214, avg=128.15, stdev=43.92, samples=20 00:13:35.163 lat (msec) : 2=0.08%, 4=2.88%, 10=38.33%, 20=7.67%, 50=26.46% 00:13:35.163 lat (msec) : 100=23.27%, 250=1.32% 00:13:35.164 cpu : usr=0.94%, sys=0.38%, ctx=4079, majf=0, minf=5 00:13:35.164 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.164 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.164 issued rwts: total=1280,1290,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.164 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.164 job26: (groupid=0, jobs=1): err= 0: pid=85322: Tue Jul 23 22:15:07 2024 00:13:35.164 read: IOPS=139, BW=17.4MiB/s (18.2MB/s)(160MiB/9195msec) 00:13:35.164 slat (usec): min=5, max=1666, avg=44.30, stdev=90.35 00:13:35.164 clat (usec): min=2353, max=24094, avg=8137.64, stdev=3304.41 00:13:35.164 lat (usec): min=2956, max=24109, avg=8181.94, stdev=3306.50 00:13:35.164 clat percentiles (usec): 
00:13:35.164 | 1.00th=[ 3621], 5.00th=[ 4424], 10.00th=[ 4752], 20.00th=[ 5407], 00:13:35.164 | 30.00th=[ 6063], 40.00th=[ 6652], 50.00th=[ 7373], 60.00th=[ 8225], 00:13:35.164 | 70.00th=[ 9241], 80.00th=[10290], 90.00th=[12256], 95.00th=[14746], 00:13:35.164 | 99.00th=[20055], 99.50th=[21627], 99.90th=[23725], 99.95th=[23987], 00:13:35.164 | 99.99th=[23987] 00:13:35.164 write: IOPS=149, BW=18.7MiB/s (19.6MB/s)(163MiB/8707msec); 0 zone resets 00:13:35.164 slat (usec): min=32, max=13212, avg=150.29, stdev=449.80 00:13:35.164 clat (msec): min=22, max=153, avg=52.77, stdev=19.31 00:13:35.164 lat (msec): min=22, max=153, avg=52.92, stdev=19.29 00:13:35.164 clat percentiles (msec): 00:13:35.164 | 1.00th=[ 32], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 39], 00:13:35.164 | 30.00th=[ 42], 40.00th=[ 45], 50.00th=[ 48], 60.00th=[ 52], 00:13:35.164 | 70.00th=[ 57], 80.00th=[ 64], 90.00th=[ 77], 95.00th=[ 97], 00:13:35.164 | 99.00th=[ 126], 99.50th=[ 130], 99.90th=[ 148], 99.95th=[ 155], 00:13:35.164 | 99.99th=[ 155] 00:13:35.164 bw ( KiB/s): min= 7424, max=26624, per=1.17%, avg=16909.47, stdev=5454.31, samples=19 00:13:35.164 iops : min= 58, max= 208, avg=132.11, stdev=42.61, samples=19 00:13:35.164 lat (msec) : 4=0.93%, 10=37.28%, 20=10.84%, 50=29.35%, 100=19.63% 00:13:35.164 lat (msec) : 250=1.97% 00:13:35.164 cpu : usr=0.79%, sys=0.55%, ctx=4201, majf=0, minf=5 00:13:35.164 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.164 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.164 issued rwts: total=1280,1303,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.164 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.164 job27: (groupid=0, jobs=1): err= 0: pid=85323: Tue Jul 23 22:15:07 2024 00:13:35.164 read: IOPS=125, BW=15.7MiB/s (16.4MB/s)(140MiB/8929msec) 00:13:35.164 slat (usec): min=5, max=2209, avg=37.90, stdev=97.48 
00:13:35.164 clat (usec): min=2731, max=99481, avg=12133.88, stdev=14433.12 00:13:35.164 lat (usec): min=2740, max=99494, avg=12171.79, stdev=14434.99 00:13:35.164 clat percentiles (usec): 00:13:35.164 | 1.00th=[ 3064], 5.00th=[ 3785], 10.00th=[ 4621], 20.00th=[ 5473], 00:13:35.164 | 30.00th=[ 6456], 40.00th=[ 7111], 50.00th=[ 7832], 60.00th=[ 9110], 00:13:35.164 | 70.00th=[10290], 80.00th=[12387], 90.00th=[18482], 95.00th=[45876], 00:13:35.164 | 99.00th=[76022], 99.50th=[93848], 99.90th=[96994], 99.95th=[99091], 00:13:35.164 | 99.99th=[99091] 00:13:35.164 write: IOPS=141, BW=17.7MiB/s (18.6MB/s)(147MiB/8295msec); 0 zone resets 00:13:35.164 slat (usec): min=32, max=3692, avg=140.07, stdev=260.58 00:13:35.164 clat (msec): min=12, max=159, avg=56.00, stdev=19.74 00:13:35.164 lat (msec): min=12, max=159, avg=56.14, stdev=19.74 00:13:35.164 clat percentiles (msec): 00:13:35.164 | 1.00th=[ 32], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 40], 00:13:35.164 | 30.00th=[ 44], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 57], 00:13:35.164 | 70.00th=[ 62], 80.00th=[ 70], 90.00th=[ 84], 95.00th=[ 94], 00:13:35.164 | 99.00th=[ 124], 99.50th=[ 134], 99.90th=[ 150], 99.95th=[ 161], 00:13:35.164 | 99.99th=[ 161] 00:13:35.164 bw ( KiB/s): min= 2039, max=25088, per=1.05%, avg=15084.16, stdev=6651.04, samples=19 00:13:35.164 iops : min= 15, max= 196, avg=117.68, stdev=52.09, samples=19 00:13:35.164 lat (msec) : 4=3.14%, 10=29.95%, 20=11.64%, 50=25.24%, 100=28.55% 00:13:35.164 lat (msec) : 250=1.48% 00:13:35.164 cpu : usr=0.70%, sys=0.43%, ctx=3762, majf=0, minf=1 00:13:35.164 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.164 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.164 issued rwts: total=1120,1174,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.164 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.164 job28: (groupid=0, 
jobs=1): err= 0: pid=85324: Tue Jul 23 22:15:07 2024 00:13:35.164 read: IOPS=123, BW=15.4MiB/s (16.2MB/s)(140MiB/9075msec) 00:13:35.164 slat (usec): min=4, max=1697, avg=38.90, stdev=95.97 00:13:35.164 clat (usec): min=2967, max=54059, avg=8150.02, stdev=4930.63 00:13:35.164 lat (usec): min=2998, max=54072, avg=8188.91, stdev=4933.17 00:13:35.164 clat percentiles (usec): 00:13:35.164 | 1.00th=[ 3294], 5.00th=[ 3621], 10.00th=[ 3818], 20.00th=[ 4555], 00:13:35.164 | 30.00th=[ 5211], 40.00th=[ 5997], 50.00th=[ 6849], 60.00th=[ 7898], 00:13:35.164 | 70.00th=[ 9110], 80.00th=[10683], 90.00th=[14222], 95.00th=[16909], 00:13:35.164 | 99.00th=[25822], 99.50th=[30278], 99.90th=[49546], 99.95th=[54264], 00:13:35.164 | 99.99th=[54264] 00:13:35.164 write: IOPS=141, BW=17.7MiB/s (18.5MB/s)(157MiB/8866msec); 0 zone resets 00:13:35.164 slat (usec): min=27, max=2826, avg=134.70, stdev=215.55 00:13:35.164 clat (msec): min=8, max=247, avg=55.76, stdev=22.63 00:13:35.164 lat (msec): min=8, max=247, avg=55.90, stdev=22.63 00:13:35.164 clat percentiles (msec): 00:13:35.164 | 1.00th=[ 29], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 40], 00:13:35.164 | 30.00th=[ 44], 40.00th=[ 47], 50.00th=[ 51], 60.00th=[ 55], 00:13:35.164 | 70.00th=[ 61], 80.00th=[ 69], 90.00th=[ 81], 95.00th=[ 97], 00:13:35.164 | 99.00th=[ 134], 99.50th=[ 180], 99.90th=[ 234], 99.95th=[ 247], 00:13:35.164 | 99.99th=[ 247] 00:13:35.164 bw ( KiB/s): min= 7936, max=26112, per=1.10%, avg=15828.95, stdev=5324.04, samples=19 00:13:35.164 iops : min= 62, max= 204, avg=123.53, stdev=41.62, samples=19 00:13:35.164 lat (msec) : 4=5.98%, 10=30.41%, 20=9.86%, 50=27.04%, 100=24.56% 00:13:35.164 lat (msec) : 250=2.15% 00:13:35.164 cpu : usr=0.85%, sys=0.36%, ctx=3978, majf=0, minf=1 00:13:35.164 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.164 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:13:35.164 issued rwts: total=1120,1254,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.164 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.164 job29: (groupid=0, jobs=1): err= 0: pid=85325: Tue Jul 23 22:15:07 2024 00:13:35.164 read: IOPS=125, BW=15.7MiB/s (16.4MB/s)(140MiB/8940msec) 00:13:35.164 slat (usec): min=5, max=783, avg=42.71, stdev=80.46 00:13:35.164 clat (msec): min=2, max=179, avg=13.34, stdev=22.78 00:13:35.164 lat (msec): min=2, max=179, avg=13.39, stdev=22.78 00:13:35.164 clat percentiles (msec): 00:13:35.164 | 1.00th=[ 4], 5.00th=[ 4], 10.00th=[ 5], 20.00th=[ 6], 00:13:35.164 | 30.00th=[ 6], 40.00th=[ 7], 50.00th=[ 8], 60.00th=[ 8], 00:13:35.164 | 70.00th=[ 10], 80.00th=[ 12], 90.00th=[ 20], 95.00th=[ 45], 00:13:35.164 | 99.00th=[ 140], 99.50th=[ 176], 99.90th=[ 180], 99.95th=[ 180], 00:13:35.164 | 99.99th=[ 180] 00:13:35.164 write: IOPS=143, BW=18.0MiB/s (18.8MB/s)(146MiB/8135msec); 0 zone resets 00:13:35.164 slat (usec): min=30, max=2526, avg=119.41, stdev=180.20 00:13:35.164 clat (msec): min=21, max=147, avg=55.18, stdev=18.88 00:13:35.164 lat (msec): min=21, max=147, avg=55.30, stdev=18.87 00:13:35.164 clat percentiles (msec): 00:13:35.164 | 1.00th=[ 26], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 40], 00:13:35.164 | 30.00th=[ 44], 40.00th=[ 47], 50.00th=[ 52], 60.00th=[ 55], 00:13:35.164 | 70.00th=[ 61], 80.00th=[ 68], 90.00th=[ 80], 95.00th=[ 94], 00:13:35.164 | 99.00th=[ 124], 99.50th=[ 131], 99.90th=[ 136], 99.95th=[ 148], 00:13:35.164 | 99.99th=[ 148] 00:13:35.164 bw ( KiB/s): min= 2560, max=22528, per=1.05%, avg=15089.05, stdev=5835.56, samples=19 00:13:35.164 iops : min= 20, max= 176, avg=117.79, stdev=45.53, samples=19 00:13:35.164 lat (msec) : 4=2.93%, 10=33.29%, 20=7.82%, 50=26.39%, 100=26.87% 00:13:35.164 lat (msec) : 250=2.71% 00:13:35.164 cpu : usr=0.80%, sys=0.38%, ctx=3731, majf=0, minf=1 00:13:35.164 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.164 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.164 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.164 issued rwts: total=1120,1169,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.164 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.164 job30: (groupid=0, jobs=1): err= 0: pid=85326: Tue Jul 23 22:15:07 2024 00:13:35.164 read: IOPS=89, BW=11.2MiB/s (11.8MB/s)(100MiB/8889msec) 00:13:35.164 slat (usec): min=6, max=2426, avg=46.46, stdev=117.22 00:13:35.164 clat (usec): min=5927, max=41149, avg=12487.29, stdev=5997.45 00:13:35.164 lat (usec): min=5948, max=41184, avg=12533.74, stdev=6001.32 00:13:35.164 clat percentiles (usec): 00:13:35.164 | 1.00th=[ 6194], 5.00th=[ 6652], 10.00th=[ 7111], 20.00th=[ 7898], 00:13:35.165 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[10683], 60.00th=[11863], 00:13:35.165 | 70.00th=[13173], 80.00th=[16057], 90.00th=[20317], 95.00th=[26084], 00:13:35.165 | 99.00th=[32637], 99.50th=[35390], 99.90th=[41157], 99.95th=[41157], 00:13:35.165 | 99.99th=[41157] 00:13:35.165 write: IOPS=99, BW=12.5MiB/s (13.1MB/s)(109MiB/8769msec); 0 zone resets 00:13:35.165 slat (usec): min=35, max=3440, avg=138.40, stdev=239.31 00:13:35.165 clat (msec): min=29, max=280, avg=79.11, stdev=38.05 00:13:35.165 lat (msec): min=29, max=280, avg=79.24, stdev=38.05 00:13:35.165 clat percentiles (msec): 00:13:35.165 | 1.00th=[ 48], 5.00th=[ 50], 10.00th=[ 50], 20.00th=[ 52], 00:13:35.165 | 30.00th=[ 55], 40.00th=[ 60], 50.00th=[ 67], 60.00th=[ 74], 00:13:35.165 | 70.00th=[ 83], 80.00th=[ 99], 90.00th=[ 126], 95.00th=[ 155], 00:13:35.165 | 99.00th=[ 234], 99.50th=[ 251], 99.90th=[ 279], 99.95th=[ 279], 00:13:35.165 | 99.99th=[ 279] 00:13:35.165 bw ( KiB/s): min= 2554, max=19494, per=0.77%, avg=11087.32, stdev=5577.33, samples=19 00:13:35.165 iops : min= 19, max= 152, avg=86.42, stdev=43.67, samples=19 00:13:35.165 lat (msec) : 10=21.49%, 20=20.96%, 50=12.36%, 100=34.93%, 250=9.97% 00:13:35.165 lat (msec) : 500=0.30% 
00:13:35.165 cpu : usr=0.58%, sys=0.30%, ctx=2691, majf=0, minf=5 00:13:35.165 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.165 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.165 issued rwts: total=800,875,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.165 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.165 job31: (groupid=0, jobs=1): err= 0: pid=85327: Tue Jul 23 22:15:07 2024 00:13:35.165 read: IOPS=88, BW=11.0MiB/s (11.6MB/s)(100MiB/9058msec) 00:13:35.165 slat (usec): min=5, max=4983, avg=58.98, stdev=211.30 00:13:35.165 clat (usec): min=3382, max=61451, avg=11111.13, stdev=7146.94 00:13:35.165 lat (usec): min=3524, max=61465, avg=11170.11, stdev=7162.71 00:13:35.165 clat percentiles (usec): 00:13:35.165 | 1.00th=[ 3982], 5.00th=[ 5014], 10.00th=[ 6259], 20.00th=[ 7111], 00:13:35.165 | 30.00th=[ 7635], 40.00th=[ 7963], 50.00th=[ 8717], 60.00th=[10028], 00:13:35.165 | 70.00th=[11863], 80.00th=[13829], 90.00th=[17957], 95.00th=[22152], 00:13:35.165 | 99.00th=[41157], 99.50th=[57410], 99.90th=[61604], 99.95th=[61604], 00:13:35.165 | 99.99th=[61604] 00:13:35.165 write: IOPS=101, BW=12.7MiB/s (13.3MB/s)(113MiB/8920msec); 0 zone resets 00:13:35.165 slat (usec): min=26, max=9518, avg=139.65, stdev=387.44 00:13:35.165 clat (msec): min=7, max=227, avg=78.28, stdev=33.43 00:13:35.165 lat (msec): min=7, max=227, avg=78.42, stdev=33.42 00:13:35.165 clat percentiles (msec): 00:13:35.165 | 1.00th=[ 13], 5.00th=[ 49], 10.00th=[ 50], 20.00th=[ 52], 00:13:35.165 | 30.00th=[ 55], 40.00th=[ 60], 50.00th=[ 68], 60.00th=[ 79], 00:13:35.165 | 70.00th=[ 92], 80.00th=[ 107], 90.00th=[ 126], 95.00th=[ 146], 00:13:35.165 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 228], 99.95th=[ 228], 00:13:35.165 | 99.99th=[ 228] 00:13:35.165 bw ( KiB/s): min= 3328, max=22316, per=0.79%, avg=11379.47, stdev=5370.87, samples=19 
00:13:35.165 iops : min= 26, max= 174, avg=88.63, stdev=41.79, samples=19 00:13:35.165 lat (msec) : 4=0.47%, 10=27.74%, 20=16.25%, 50=8.39%, 100=34.25% 00:13:35.165 lat (msec) : 250=12.90% 00:13:35.165 cpu : usr=0.59%, sys=0.29%, ctx=2758, majf=0, minf=3 00:13:35.165 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.165 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.165 issued rwts: total=800,905,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.165 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.165 job32: (groupid=0, jobs=1): err= 0: pid=85328: Tue Jul 23 22:15:07 2024 00:13:35.165 read: IOPS=86, BW=10.8MiB/s (11.3MB/s)(100MiB/9277msec) 00:13:35.165 slat (usec): min=5, max=3118, avg=52.96, stdev=149.77 00:13:35.165 clat (usec): min=1690, max=223975, avg=12919.98, stdev=22765.78 00:13:35.165 lat (msec): min=3, max=223, avg=12.97, stdev=22.76 00:13:35.165 clat percentiles (msec): 00:13:35.165 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 7], 00:13:35.165 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 11], 00:13:35.165 | 70.00th=[ 11], 80.00th=[ 12], 90.00th=[ 15], 95.00th=[ 19], 00:13:35.165 | 99.00th=[ 86], 99.50th=[ 220], 99.90th=[ 224], 99.95th=[ 224], 00:13:35.165 | 99.99th=[ 224] 00:13:35.165 write: IOPS=104, BW=13.0MiB/s (13.6MB/s)(114MiB/8776msec); 0 zone resets 00:13:35.165 slat (usec): min=29, max=6318, avg=148.07, stdev=424.52 00:13:35.165 clat (usec): min=1193, max=269923, avg=76269.28, stdev=37719.64 00:13:35.165 lat (usec): min=1309, max=269983, avg=76417.35, stdev=37712.16 00:13:35.165 clat percentiles (msec): 00:13:35.165 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 50], 20.00th=[ 52], 00:13:35.165 | 30.00th=[ 55], 40.00th=[ 60], 50.00th=[ 69], 60.00th=[ 80], 00:13:35.165 | 70.00th=[ 92], 80.00th=[ 104], 90.00th=[ 128], 95.00th=[ 142], 00:13:35.165 | 99.00th=[ 184], 99.50th=[ 224], 
99.90th=[ 271], 99.95th=[ 271], 00:13:35.165 | 99.99th=[ 271] 00:13:35.165 bw ( KiB/s): min= 2048, max=33335, per=0.80%, avg=11582.70, stdev=6883.67, samples=20 00:13:35.165 iops : min= 16, max= 260, avg=90.25, stdev=53.84, samples=20 00:13:35.165 lat (msec) : 2=0.12%, 4=0.58%, 10=30.12%, 20=17.69%, 50=4.61% 00:13:35.165 lat (msec) : 100=33.98%, 250=12.78%, 500=0.12% 00:13:35.165 cpu : usr=0.62%, sys=0.28%, ctx=2722, majf=0, minf=5 00:13:35.165 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.165 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.165 issued rwts: total=800,913,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.165 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.165 job33: (groupid=0, jobs=1): err= 0: pid=85329: Tue Jul 23 22:15:07 2024 00:13:35.165 read: IOPS=90, BW=11.3MiB/s (11.9MB/s)(99.2MiB/8765msec) 00:13:35.165 slat (usec): min=5, max=1074, avg=48.87, stdev=87.74 00:13:35.165 clat (msec): min=3, max=289, avg=18.93, stdev=27.88 00:13:35.165 lat (msec): min=3, max=289, avg=18.98, stdev=27.88 00:13:35.165 clat percentiles (msec): 00:13:35.165 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 10], 00:13:35.165 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 14], 60.00th=[ 17], 00:13:35.165 | 70.00th=[ 19], 80.00th=[ 21], 90.00th=[ 31], 95.00th=[ 40], 00:13:35.165 | 99.00th=[ 241], 99.50th=[ 284], 99.90th=[ 292], 99.95th=[ 292], 00:13:35.165 | 99.99th=[ 292] 00:13:35.165 write: IOPS=98, BW=12.3MiB/s (12.9MB/s)(100MiB/8110msec); 0 zone resets 00:13:35.165 slat (usec): min=35, max=2119, avg=129.87, stdev=173.27 00:13:35.165 clat (msec): min=37, max=301, avg=80.31, stdev=41.65 00:13:35.165 lat (msec): min=37, max=301, avg=80.44, stdev=41.66 00:13:35.165 clat percentiles (msec): 00:13:35.165 | 1.00th=[ 47], 5.00th=[ 49], 10.00th=[ 50], 20.00th=[ 53], 00:13:35.165 | 30.00th=[ 57], 40.00th=[ 61], 
50.00th=[ 67], 60.00th=[ 75], 00:13:35.165 | 70.00th=[ 84], 80.00th=[ 97], 90.00th=[ 112], 95.00th=[ 176], 00:13:35.165 | 99.00th=[ 255], 99.50th=[ 284], 99.90th=[ 300], 99.95th=[ 300], 00:13:35.165 | 99.99th=[ 300] 00:13:35.165 bw ( KiB/s): min= 256, max=18981, per=0.73%, avg=10468.47, stdev=5645.32, samples=19 00:13:35.165 iops : min= 2, max= 148, avg=81.68, stdev=44.07, samples=19 00:13:35.165 lat (msec) : 4=0.13%, 10=12.86%, 20=24.97%, 50=16.50%, 100=36.70% 00:13:35.165 lat (msec) : 250=7.72%, 500=1.13% 00:13:35.165 cpu : usr=0.61%, sys=0.28%, ctx=2683, majf=0, minf=9 00:13:35.165 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.165 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.165 issued rwts: total=794,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.165 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.165 job34: (groupid=0, jobs=1): err= 0: pid=85330: Tue Jul 23 22:15:07 2024 00:13:35.165 read: IOPS=89, BW=11.2MiB/s (11.8MB/s)(100MiB/8918msec) 00:13:35.165 slat (usec): min=5, max=754, avg=48.41, stdev=88.44 00:13:35.165 clat (usec): min=5139, max=49384, avg=12018.14, stdev=6265.14 00:13:35.165 lat (usec): min=5645, max=49399, avg=12066.55, stdev=6265.05 00:13:35.165 clat percentiles (usec): 00:13:35.165 | 1.00th=[ 6194], 5.00th=[ 6980], 10.00th=[ 7242], 20.00th=[ 7701], 00:13:35.165 | 30.00th=[ 8160], 40.00th=[ 8717], 50.00th=[ 9765], 60.00th=[11469], 00:13:35.165 | 70.00th=[12780], 80.00th=[14353], 90.00th=[20317], 95.00th=[25560], 00:13:35.165 | 99.00th=[38536], 99.50th=[43254], 99.90th=[49546], 99.95th=[49546], 00:13:35.165 | 99.99th=[49546] 00:13:35.165 write: IOPS=105, BW=13.2MiB/s (13.8MB/s)(116MiB/8823msec); 0 zone resets 00:13:35.165 slat (usec): min=30, max=10242, avg=151.54, stdev=403.89 00:13:35.165 clat (msec): min=7, max=255, avg=75.34, stdev=31.80 00:13:35.165 lat (msec): 
min=7, max=255, avg=75.49, stdev=31.79 00:13:35.165 clat percentiles (msec): 00:13:35.165 | 1.00th=[ 22], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 53], 00:13:35.165 | 30.00th=[ 55], 40.00th=[ 59], 50.00th=[ 65], 60.00th=[ 72], 00:13:35.165 | 70.00th=[ 83], 80.00th=[ 100], 90.00th=[ 110], 95.00th=[ 136], 00:13:35.165 | 99.00th=[ 201], 99.50th=[ 245], 99.90th=[ 255], 99.95th=[ 255], 00:13:35.165 | 99.99th=[ 255] 00:13:35.165 bw ( KiB/s): min= 3072, max=20736, per=0.81%, avg=11614.63, stdev=5198.06, samples=19 00:13:35.165 iops : min= 24, max= 162, avg=90.58, stdev=40.62, samples=19 00:13:35.165 lat (msec) : 10=23.93%, 20=17.63%, 50=10.40%, 100=37.98%, 250=9.94% 00:13:35.165 lat (msec) : 500=0.12% 00:13:35.165 cpu : usr=0.60%, sys=0.32%, ctx=2820, majf=0, minf=1 00:13:35.165 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.165 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.165 issued rwts: total=800,930,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.165 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.165 job35: (groupid=0, jobs=1): err= 0: pid=85331: Tue Jul 23 22:15:07 2024 00:13:35.165 read: IOPS=79, BW=9.96MiB/s (10.4MB/s)(80.0MiB/8031msec) 00:13:35.165 slat (usec): min=5, max=1387, avg=47.20, stdev=102.03 00:13:35.165 clat (msec): min=2, max=141, avg=17.04, stdev=21.96 00:13:35.165 lat (msec): min=2, max=141, avg=17.08, stdev=21.96 00:13:35.165 clat percentiles (msec): 00:13:35.165 | 1.00th=[ 5], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 7], 00:13:35.166 | 30.00th=[ 8], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 13], 00:13:35.166 | 70.00th=[ 15], 80.00th=[ 18], 90.00th=[ 28], 95.00th=[ 64], 00:13:35.166 | 99.00th=[ 131], 99.50th=[ 138], 99.90th=[ 142], 99.95th=[ 142], 00:13:35.166 | 99.99th=[ 142] 00:13:35.166 write: IOPS=89, BW=11.2MiB/s (11.8MB/s)(97.1MiB/8666msec); 0 zone resets 00:13:35.166 slat (usec): 
min=36, max=10726, avg=157.14, stdev=455.90 00:13:35.166 clat (msec): min=45, max=298, avg=88.64, stdev=35.00 00:13:35.166 lat (msec): min=45, max=298, avg=88.80, stdev=35.01 00:13:35.166 clat percentiles (msec): 00:13:35.166 | 1.00th=[ 48], 5.00th=[ 50], 10.00th=[ 53], 20.00th=[ 59], 00:13:35.166 | 30.00th=[ 65], 40.00th=[ 73], 50.00th=[ 82], 60.00th=[ 95], 00:13:35.166 | 70.00th=[ 103], 80.00th=[ 111], 90.00th=[ 133], 95.00th=[ 150], 00:13:35.166 | 99.00th=[ 199], 99.50th=[ 264], 99.90th=[ 300], 99.95th=[ 300], 00:13:35.166 | 99.99th=[ 300] 00:13:35.166 bw ( KiB/s): min= 2304, max=17408, per=0.66%, avg=9548.37, stdev=4128.20, samples=19 00:13:35.166 iops : min= 18, max= 136, avg=74.42, stdev=32.22, samples=19 00:13:35.166 lat (msec) : 4=0.28%, 10=21.67%, 20=15.81%, 50=7.27%, 100=35.92% 00:13:35.166 lat (msec) : 250=18.77%, 500=0.28% 00:13:35.166 cpu : usr=0.52%, sys=0.25%, ctx=2302, majf=0, minf=13 00:13:35.166 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.166 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.166 issued rwts: total=640,777,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.166 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.166 job36: (groupid=0, jobs=1): err= 0: pid=85332: Tue Jul 23 22:15:07 2024 00:13:35.166 read: IOPS=90, BW=11.3MiB/s (11.9MB/s)(100MiB/8814msec) 00:13:35.166 slat (usec): min=5, max=1010, avg=58.34, stdev=113.37 00:13:35.166 clat (usec): min=5287, max=67876, avg=15134.65, stdev=8481.56 00:13:35.166 lat (usec): min=5696, max=67887, avg=15192.99, stdev=8480.18 00:13:35.166 clat percentiles (usec): 00:13:35.166 | 1.00th=[ 6390], 5.00th=[ 6980], 10.00th=[ 7701], 20.00th=[ 8979], 00:13:35.166 | 30.00th=[10028], 40.00th=[11076], 50.00th=[13304], 60.00th=[15270], 00:13:35.166 | 70.00th=[17695], 80.00th=[19530], 90.00th=[22938], 95.00th=[27919], 00:13:35.166 | 
99.00th=[58459], 99.50th=[62129], 99.90th=[67634], 99.95th=[67634], 00:13:35.166 | 99.99th=[67634] 00:13:35.166 write: IOPS=103, BW=13.0MiB/s (13.6MB/s)(111MiB/8534msec); 0 zone resets 00:13:35.166 slat (usec): min=28, max=4482, avg=131.59, stdev=234.38 00:13:35.166 clat (msec): min=22, max=279, avg=76.45, stdev=35.10 00:13:35.166 lat (msec): min=23, max=279, avg=76.58, stdev=35.09 00:13:35.166 clat percentiles (msec): 00:13:35.166 | 1.00th=[ 27], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 53], 00:13:35.166 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 73], 00:13:35.166 | 70.00th=[ 81], 80.00th=[ 93], 90.00th=[ 116], 95.00th=[ 153], 00:13:35.166 | 99.00th=[ 215], 99.50th=[ 236], 99.90th=[ 279], 99.95th=[ 279], 00:13:35.166 | 99.99th=[ 279] 00:13:35.166 bw ( KiB/s): min= 1792, max=18176, per=0.78%, avg=11225.90, stdev=5691.04, samples=20 00:13:35.166 iops : min= 14, max= 142, avg=87.60, stdev=44.38, samples=20 00:13:35.166 lat (msec) : 10=14.18%, 20=24.27%, 50=13.83%, 100=39.94%, 250=7.66% 00:13:35.166 lat (msec) : 500=0.12% 00:13:35.166 cpu : usr=0.69%, sys=0.21%, ctx=2810, majf=0, minf=5 00:13:35.166 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.166 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.166 issued rwts: total=800,885,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.166 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.166 job37: (groupid=0, jobs=1): err= 0: pid=85333: Tue Jul 23 22:15:07 2024 00:13:35.166 read: IOPS=89, BW=11.2MiB/s (11.7MB/s)(97.2MiB/8695msec) 00:13:35.166 slat (usec): min=5, max=866, avg=62.37, stdev=115.73 00:13:35.166 clat (msec): min=3, max=201, avg=19.64, stdev=26.95 00:13:35.166 lat (msec): min=3, max=201, avg=19.70, stdev=26.95 00:13:35.166 clat percentiles (msec): 00:13:35.166 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8], 00:13:35.166 | 30.00th=[ 
9], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 16], 00:13:35.166 | 70.00th=[ 20], 80.00th=[ 22], 90.00th=[ 31], 95.00th=[ 52], 00:13:35.166 | 99.00th=[ 194], 99.50th=[ 199], 99.90th=[ 203], 99.95th=[ 203], 00:13:35.166 | 99.99th=[ 203] 00:13:35.166 write: IOPS=99, BW=12.4MiB/s (13.0MB/s)(100MiB/8076msec); 0 zone resets 00:13:35.166 slat (usec): min=31, max=5213, avg=125.95, stdev=237.68 00:13:35.166 clat (msec): min=36, max=250, avg=79.95, stdev=31.27 00:13:35.166 lat (msec): min=39, max=250, avg=80.07, stdev=31.27 00:13:35.166 clat percentiles (msec): 00:13:35.166 | 1.00th=[ 45], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 54], 00:13:35.166 | 30.00th=[ 59], 40.00th=[ 66], 50.00th=[ 74], 60.00th=[ 83], 00:13:35.166 | 70.00th=[ 93], 80.00th=[ 101], 90.00th=[ 111], 95.00th=[ 136], 00:13:35.166 | 99.00th=[ 211], 99.50th=[ 228], 99.90th=[ 251], 99.95th=[ 251], 00:13:35.166 | 99.99th=[ 251] 00:13:35.166 bw ( KiB/s): min= 1280, max=18944, per=0.75%, avg=10789.50, stdev=4898.58, samples=18 00:13:35.166 iops : min= 10, max= 148, avg=84.11, stdev=38.25, samples=18 00:13:35.166 lat (msec) : 4=0.82%, 10=16.03%, 20=19.65%, 50=15.59%, 100=36.63% 00:13:35.166 lat (msec) : 250=11.22%, 500=0.06% 00:13:35.166 cpu : usr=0.59%, sys=0.24%, ctx=2707, majf=0, minf=5 00:13:35.166 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.166 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.166 issued rwts: total=778,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.166 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.166 job38: (groupid=0, jobs=1): err= 0: pid=85334: Tue Jul 23 22:15:07 2024 00:13:35.166 read: IOPS=90, BW=11.3MiB/s (11.9MB/s)(100MiB/8815msec) 00:13:35.166 slat (usec): min=5, max=630, avg=43.66, stdev=72.51 00:13:35.166 clat (usec): min=5946, max=43291, avg=14620.94, stdev=6065.66 00:13:35.166 lat (usec): min=5981, max=43311, 
avg=14664.60, stdev=6066.92 00:13:35.166 clat percentiles (usec): 00:13:35.166 | 1.00th=[ 6587], 5.00th=[ 7177], 10.00th=[ 7635], 20.00th=[ 9372], 00:13:35.166 | 30.00th=[10552], 40.00th=[11469], 50.00th=[13698], 60.00th=[15401], 00:13:35.166 | 70.00th=[17957], 80.00th=[19792], 90.00th=[21103], 95.00th=[24511], 00:13:35.166 | 99.00th=[34866], 99.50th=[38536], 99.90th=[43254], 99.95th=[43254], 00:13:35.166 | 99.99th=[43254] 00:13:35.166 write: IOPS=101, BW=12.6MiB/s (13.3MB/s)(109MiB/8603msec); 0 zone resets 00:13:35.166 slat (usec): min=38, max=18747, avg=178.04, stdev=839.23 00:13:35.166 clat (msec): min=12, max=384, avg=78.34, stdev=43.73 00:13:35.166 lat (msec): min=12, max=384, avg=78.52, stdev=43.70 00:13:35.166 clat percentiles (msec): 00:13:35.166 | 1.00th=[ 13], 5.00th=[ 49], 10.00th=[ 51], 20.00th=[ 54], 00:13:35.166 | 30.00th=[ 57], 40.00th=[ 62], 50.00th=[ 66], 60.00th=[ 73], 00:13:35.166 | 70.00th=[ 80], 80.00th=[ 91], 90.00th=[ 113], 95.00th=[ 161], 00:13:35.166 | 99.00th=[ 288], 99.50th=[ 300], 99.90th=[ 384], 99.95th=[ 384], 00:13:35.166 | 99.99th=[ 384] 00:13:35.166 bw ( KiB/s): min= 2304, max=17664, per=0.77%, avg=11042.60, stdev=5590.55, samples=20 00:13:35.166 iops : min= 18, max= 138, avg=86.15, stdev=43.66, samples=20 00:13:35.166 lat (msec) : 10=12.46%, 20=28.44%, 50=11.38%, 100=39.58%, 250=7.19% 00:13:35.166 lat (msec) : 500=0.96% 00:13:35.166 cpu : usr=0.60%, sys=0.31%, ctx=2774, majf=0, minf=7 00:13:35.166 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.166 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.166 issued rwts: total=800,870,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.166 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.166 job39: (groupid=0, jobs=1): err= 0: pid=85335: Tue Jul 23 22:15:07 2024 00:13:35.166 read: IOPS=74, BW=9474KiB/s (9701kB/s)(80.0MiB/8647msec) 
00:13:35.166 slat (usec): min=5, max=1239, avg=51.45, stdev=101.01 00:13:35.166 clat (msec): min=3, max=189, avg=20.82, stdev=25.00 00:13:35.166 lat (msec): min=3, max=189, avg=20.87, stdev=25.01 00:13:35.166 clat percentiles (msec): 00:13:35.166 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8], 00:13:35.166 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 17], 00:13:35.166 | 70.00th=[ 21], 80.00th=[ 25], 90.00th=[ 36], 95.00th=[ 77], 00:13:35.166 | 99.00th=[ 161], 99.50th=[ 182], 99.90th=[ 190], 99.95th=[ 190], 00:13:35.166 | 99.99th=[ 190] 00:13:35.166 write: IOPS=92, BW=11.6MiB/s (12.2MB/s)(97.0MiB/8360msec); 0 zone resets 00:13:35.166 slat (usec): min=33, max=3582, avg=151.82, stdev=289.25 00:13:35.166 clat (msec): min=34, max=356, avg=85.48, stdev=35.88 00:13:35.166 lat (msec): min=35, max=356, avg=85.63, stdev=35.88 00:13:35.166 clat percentiles (msec): 00:13:35.166 | 1.00th=[ 41], 5.00th=[ 50], 10.00th=[ 52], 20.00th=[ 58], 00:13:35.166 | 30.00th=[ 65], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 88], 00:13:35.166 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 117], 95.00th=[ 142], 00:13:35.166 | 99.00th=[ 249], 99.50th=[ 296], 99.90th=[ 359], 99.95th=[ 359], 00:13:35.166 | 99.99th=[ 359] 00:13:35.166 bw ( KiB/s): min= 2048, max=17186, per=0.66%, avg=9563.16, stdev=4754.22, samples=19 00:13:35.166 iops : min= 16, max= 134, avg=74.53, stdev=37.05, samples=19 00:13:35.166 lat (msec) : 4=0.21%, 10=15.11%, 20=16.10%, 50=14.48%, 100=39.12% 00:13:35.166 lat (msec) : 250=14.48%, 500=0.49% 00:13:35.166 cpu : usr=0.53%, sys=0.26%, ctx=2443, majf=0, minf=7 00:13:35.166 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.166 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.166 issued rwts: total=640,776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.166 latency : target=0, window=0, percentile=100.00%, depth=8 
00:13:35.166 job40: (groupid=0, jobs=1): err= 0: pid=85336: Tue Jul 23 22:15:07 2024 00:13:35.166 read: IOPS=91, BW=11.4MiB/s (12.0MB/s)(100MiB/8750msec) 00:13:35.166 slat (usec): min=5, max=1644, avg=54.65, stdev=118.32 00:13:35.166 clat (usec): min=3859, max=99601, avg=15312.24, stdev=11368.23 00:13:35.166 lat (usec): min=3940, max=99618, avg=15366.89, stdev=11373.92 00:13:35.166 clat percentiles (usec): 00:13:35.166 | 1.00th=[ 4490], 5.00th=[ 5932], 10.00th=[ 6521], 20.00th=[ 8586], 00:13:35.167 | 30.00th=[10028], 40.00th=[11076], 50.00th=[12256], 60.00th=[13960], 00:13:35.167 | 70.00th=[17171], 80.00th=[19792], 90.00th=[23987], 95.00th=[29754], 00:13:35.167 | 99.00th=[67634], 99.50th=[83362], 99.90th=[99091], 99.95th=[99091], 00:13:35.167 | 99.99th=[99091] 00:13:35.167 write: IOPS=106, BW=13.3MiB/s (13.9MB/s)(113MiB/8493msec); 0 zone resets 00:13:35.167 slat (usec): min=35, max=2432, avg=129.09, stdev=175.47 00:13:35.167 clat (msec): min=22, max=318, avg=74.61, stdev=33.09 00:13:35.167 lat (msec): min=23, max=318, avg=74.74, stdev=33.08 00:13:35.167 clat percentiles (msec): 00:13:35.167 | 1.00th=[ 44], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 53], 00:13:35.167 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 71], 00:13:35.167 | 70.00th=[ 80], 80.00th=[ 91], 90.00th=[ 106], 95.00th=[ 131], 00:13:35.167 | 99.00th=[ 205], 99.50th=[ 236], 99.90th=[ 317], 99.95th=[ 317], 00:13:35.167 | 99.99th=[ 317] 00:13:35.167 bw ( KiB/s): min= 4352, max=18981, per=0.79%, avg=11355.79, stdev=5002.33, samples=19 00:13:35.167 iops : min= 34, max= 148, avg=88.53, stdev=39.12, samples=19 00:13:35.167 lat (msec) : 4=0.18%, 10=14.10%, 20=23.80%, 50=14.22%, 100=40.66% 00:13:35.167 lat (msec) : 250=6.87%, 500=0.18% 00:13:35.167 cpu : usr=0.58%, sys=0.33%, ctx=2871, majf=0, minf=1 00:13:35.167 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.167 complete : 0=0.0%, 
4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.167 issued rwts: total=800,902,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.167 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.167 job41: (groupid=0, jobs=1): err= 0: pid=85337: Tue Jul 23 22:15:07 2024 00:13:35.167 read: IOPS=91, BW=11.4MiB/s (12.0MB/s)(90.0MiB/7885msec) 00:13:35.167 slat (usec): min=5, max=1177, avg=47.69, stdev=101.38 00:13:35.167 clat (msec): min=3, max=331, avg=17.72, stdev=36.54 00:13:35.167 lat (msec): min=3, max=331, avg=17.76, stdev=36.54 00:13:35.167 clat percentiles (msec): 00:13:35.167 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7], 00:13:35.167 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 11], 60.00th=[ 12], 00:13:35.167 | 70.00th=[ 13], 80.00th=[ 15], 90.00th=[ 23], 95.00th=[ 72], 00:13:35.167 | 99.00th=[ 317], 99.50th=[ 317], 99.90th=[ 334], 99.95th=[ 334], 00:13:35.167 | 99.99th=[ 334] 00:13:35.167 write: IOPS=95, BW=11.9MiB/s (12.5MB/s)(100MiB/8396msec); 0 zone resets 00:13:35.167 slat (usec): min=35, max=4154, avg=141.69, stdev=227.49 00:13:35.167 clat (msec): min=40, max=316, avg=83.41, stdev=36.08 00:13:35.167 lat (msec): min=41, max=316, avg=83.55, stdev=36.09 00:13:35.167 clat percentiles (msec): 00:13:35.167 | 1.00th=[ 48], 5.00th=[ 50], 10.00th=[ 53], 20.00th=[ 57], 00:13:35.167 | 30.00th=[ 62], 40.00th=[ 66], 50.00th=[ 74], 60.00th=[ 84], 00:13:35.167 | 70.00th=[ 94], 80.00th=[ 102], 90.00th=[ 124], 95.00th=[ 153], 00:13:35.167 | 99.00th=[ 236], 99.50th=[ 262], 99.90th=[ 317], 99.95th=[ 317], 00:13:35.167 | 99.99th=[ 317] 00:13:35.167 bw ( KiB/s): min= 3584, max=17593, per=0.72%, avg=10404.84, stdev=4598.89, samples=19 00:13:35.167 iops : min= 28, max= 137, avg=81.16, stdev=35.78, samples=19 00:13:35.167 lat (msec) : 4=1.25%, 10=21.97%, 20=18.62%, 50=6.51%, 100=39.74% 00:13:35.167 lat (msec) : 250=10.99%, 500=0.92% 00:13:35.167 cpu : usr=0.55%, sys=0.26%, ctx=2532, majf=0, minf=5 00:13:35.167 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.167 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.167 issued rwts: total=720,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.167 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.167 job42: (groupid=0, jobs=1): err= 0: pid=85338: Tue Jul 23 22:15:07 2024 00:13:35.167 read: IOPS=91, BW=11.4MiB/s (11.9MB/s)(100MiB/8780msec) 00:13:35.167 slat (usec): min=5, max=2176, avg=57.63, stdev=136.79 00:13:35.167 clat (usec): min=3288, max=37493, avg=13023.81, stdev=5687.47 00:13:35.167 lat (usec): min=4480, max=37502, avg=13081.44, stdev=5685.03 00:13:35.167 clat percentiles (usec): 00:13:35.167 | 1.00th=[ 5669], 5.00th=[ 6390], 10.00th=[ 7635], 20.00th=[ 8717], 00:13:35.167 | 30.00th=[ 9765], 40.00th=[10552], 50.00th=[11600], 60.00th=[12780], 00:13:35.167 | 70.00th=[14484], 80.00th=[16319], 90.00th=[20317], 95.00th=[24511], 00:13:35.167 | 99.00th=[32113], 99.50th=[35390], 99.90th=[37487], 99.95th=[37487], 00:13:35.167 | 99.99th=[37487] 00:13:35.167 write: IOPS=104, BW=13.0MiB/s (13.7MB/s)(114MiB/8772msec); 0 zone resets 00:13:35.167 slat (usec): min=36, max=59907, avg=200.04, stdev=1991.06 00:13:35.167 clat (msec): min=4, max=304, avg=75.93, stdev=32.26 00:13:35.167 lat (msec): min=4, max=304, avg=76.13, stdev=32.23 00:13:35.167 clat percentiles (msec): 00:13:35.167 | 1.00th=[ 43], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 54], 00:13:35.167 | 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 66], 60.00th=[ 73], 00:13:35.167 | 70.00th=[ 82], 80.00th=[ 95], 90.00th=[ 109], 95.00th=[ 136], 00:13:35.167 | 99.00th=[ 209], 99.50th=[ 232], 99.90th=[ 305], 99.95th=[ 305], 00:13:35.167 | 99.99th=[ 305] 00:13:35.167 bw ( KiB/s): min= 2810, max=19456, per=0.81%, avg=11606.65, stdev=4735.95, samples=20 00:13:35.167 iops : min= 21, max= 152, avg=90.55, stdev=37.07, samples=20 00:13:35.167 lat (msec) : 4=0.06%, 10=16.22%, 
20=25.26%, 50=11.14%, 100=39.56% 00:13:35.167 lat (msec) : 250=7.53%, 500=0.23% 00:13:35.167 cpu : usr=0.61%, sys=0.32%, ctx=2830, majf=0, minf=3 00:13:35.167 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.167 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.167 issued rwts: total=800,914,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.167 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.167 job43: (groupid=0, jobs=1): err= 0: pid=85339: Tue Jul 23 22:15:07 2024 00:13:35.167 read: IOPS=91, BW=11.4MiB/s (11.9MB/s)(100MiB/8780msec) 00:13:35.167 slat (usec): min=5, max=757, avg=43.83, stdev=84.14 00:13:35.167 clat (usec): min=4158, max=42245, avg=12282.17, stdev=5516.37 00:13:35.167 lat (usec): min=4250, max=42259, avg=12326.00, stdev=5514.53 00:13:35.167 clat percentiles (usec): 00:13:35.167 | 1.00th=[ 4621], 5.00th=[ 5735], 10.00th=[ 6587], 20.00th=[ 8455], 00:13:35.167 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10945], 60.00th=[11863], 00:13:35.167 | 70.00th=[13698], 80.00th=[15401], 90.00th=[17957], 95.00th=[23200], 00:13:35.167 | 99.00th=[33817], 99.50th=[39060], 99.90th=[42206], 99.95th=[42206], 00:13:35.167 | 99.99th=[42206] 00:13:35.167 write: IOPS=106, BW=13.3MiB/s (13.9MB/s)(117MiB/8801msec); 0 zone resets 00:13:35.167 slat (usec): min=36, max=4862, avg=155.31, stdev=333.81 00:13:35.167 clat (msec): min=18, max=212, avg=74.69, stdev=28.97 00:13:35.167 lat (msec): min=18, max=212, avg=74.85, stdev=28.97 00:13:35.167 clat percentiles (msec): 00:13:35.167 | 1.00th=[ 45], 5.00th=[ 49], 10.00th=[ 51], 20.00th=[ 54], 00:13:35.167 | 30.00th=[ 57], 40.00th=[ 62], 50.00th=[ 65], 60.00th=[ 72], 00:13:35.167 | 70.00th=[ 82], 80.00th=[ 94], 90.00th=[ 108], 95.00th=[ 130], 00:13:35.167 | 99.00th=[ 192], 99.50th=[ 209], 99.90th=[ 213], 99.95th=[ 213], 00:13:35.167 | 99.99th=[ 213] 00:13:35.167 bw ( KiB/s): 
min= 4343, max=18944, per=0.82%, avg=11760.26, stdev=4684.67, samples=19 00:13:35.167 iops : min= 33, max= 148, avg=91.79, stdev=36.64, samples=19 00:13:35.167 lat (msec) : 10=15.75%, 20=27.06%, 50=8.25%, 100=41.49%, 250=7.44% 00:13:35.167 cpu : usr=0.59%, sys=0.34%, ctx=2907, majf=0, minf=1 00:13:35.167 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.167 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.167 issued rwts: total=800,933,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.167 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.167 job44: (groupid=0, jobs=1): err= 0: pid=85340: Tue Jul 23 22:15:07 2024 00:13:35.167 read: IOPS=88, BW=11.0MiB/s (11.5MB/s)(100MiB/9085msec) 00:13:35.167 slat (usec): min=5, max=907, avg=48.61, stdev=94.44 00:13:35.167 clat (usec): min=4503, max=96562, avg=14204.84, stdev=11349.16 00:13:35.167 lat (usec): min=4517, max=96630, avg=14253.44, stdev=11345.17 00:13:35.167 clat percentiles (usec): 00:13:35.167 | 1.00th=[ 5932], 5.00th=[ 7177], 10.00th=[ 8094], 20.00th=[ 9110], 00:13:35.167 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10683], 60.00th=[11863], 00:13:35.167 | 70.00th=[13960], 80.00th=[16188], 90.00th=[20317], 95.00th=[29230], 00:13:35.167 | 99.00th=[71828], 99.50th=[90702], 99.90th=[96994], 99.95th=[96994], 00:13:35.167 | 99.99th=[96994] 00:13:35.167 write: IOPS=110, BW=13.8MiB/s (14.5MB/s)(119MiB/8645msec); 0 zone resets 00:13:35.167 slat (usec): min=35, max=30955, avg=156.81, stdev=1008.89 00:13:35.167 clat (msec): min=2, max=252, avg=71.69, stdev=30.21 00:13:35.167 lat (msec): min=2, max=252, avg=71.85, stdev=30.16 00:13:35.167 clat percentiles (msec): 00:13:35.167 | 1.00th=[ 14], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 52], 00:13:35.167 | 30.00th=[ 55], 40.00th=[ 59], 50.00th=[ 64], 60.00th=[ 69], 00:13:35.167 | 70.00th=[ 78], 80.00th=[ 87], 90.00th=[ 107], 
95.00th=[ 129], 00:13:35.167 | 99.00th=[ 192], 99.50th=[ 207], 99.90th=[ 253], 99.95th=[ 253], 00:13:35.167 | 99.99th=[ 253] 00:13:35.167 bw ( KiB/s): min= 2048, max=23296, per=0.84%, avg=12118.60, stdev=5642.74, samples=20 00:13:35.167 iops : min= 16, max= 182, avg=94.60, stdev=44.15, samples=20 00:13:35.167 lat (msec) : 4=0.17%, 10=17.55%, 20=23.99%, 50=11.97%, 100=39.83% 00:13:35.167 lat (msec) : 250=6.44%, 500=0.06% 00:13:35.167 cpu : usr=0.63%, sys=0.31%, ctx=2812, majf=0, minf=3 00:13:35.167 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.167 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.167 issued rwts: total=800,955,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.167 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.167 job45: (groupid=0, jobs=1): err= 0: pid=85341: Tue Jul 23 22:15:07 2024 00:13:35.167 read: IOPS=93, BW=11.7MiB/s (12.3MB/s)(100MiB/8523msec) 00:13:35.167 slat (usec): min=5, max=3284, avg=57.80, stdev=155.70 00:13:35.167 clat (usec): min=2981, max=78894, avg=11863.92, stdev=9408.62 00:13:35.167 lat (usec): min=3012, max=78967, avg=11921.72, stdev=9405.65 00:13:35.167 clat percentiles (usec): 00:13:35.167 | 1.00th=[ 3556], 5.00th=[ 4752], 10.00th=[ 5276], 20.00th=[ 6390], 00:13:35.167 | 30.00th=[ 7111], 40.00th=[ 8356], 50.00th=[ 9765], 60.00th=[10552], 00:13:35.167 | 70.00th=[11994], 80.00th=[13960], 90.00th=[19006], 95.00th=[25035], 00:13:35.168 | 99.00th=[55837], 99.50th=[58459], 99.90th=[79168], 99.95th=[79168], 00:13:35.168 | 99.99th=[79168] 00:13:35.168 write: IOPS=92, BW=11.5MiB/s (12.1MB/s)(102MiB/8834msec); 0 zone resets 00:13:35.168 slat (usec): min=37, max=5971, avg=152.38, stdev=343.11 00:13:35.168 clat (msec): min=28, max=328, avg=86.12, stdev=43.54 00:13:35.168 lat (msec): min=28, max=328, avg=86.27, stdev=43.55 00:13:35.168 clat percentiles (msec): 00:13:35.168 | 
1.00th=[ 40], 5.00th=[ 50], 10.00th=[ 52], 20.00th=[ 56], 00:13:35.168 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 73], 60.00th=[ 85], 00:13:35.168 | 70.00th=[ 92], 80.00th=[ 106], 90.00th=[ 132], 95.00th=[ 171], 00:13:35.168 | 99.00th=[ 271], 99.50th=[ 288], 99.90th=[ 330], 99.95th=[ 330], 00:13:35.168 | 99.99th=[ 330] 00:13:35.168 bw ( KiB/s): min= 2560, max=17884, per=0.73%, avg=10520.21, stdev=4752.46, samples=19 00:13:35.168 iops : min= 20, max= 139, avg=82.05, stdev=37.15, samples=19 00:13:35.168 lat (msec) : 4=0.62%, 10=25.90%, 20=18.90%, 50=7.00%, 100=35.25% 00:13:35.168 lat (msec) : 250=11.52%, 500=0.81% 00:13:35.168 cpu : usr=0.58%, sys=0.26%, ctx=2715, majf=0, minf=7 00:13:35.168 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.168 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.168 issued rwts: total=800,814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.168 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.168 job46: (groupid=0, jobs=1): err= 0: pid=85342: Tue Jul 23 22:15:07 2024 00:13:35.168 read: IOPS=87, BW=10.9MiB/s (11.5MB/s)(100MiB/9138msec) 00:13:35.168 slat (usec): min=5, max=2182, avg=48.04, stdev=114.42 00:13:35.168 clat (msec): min=3, max=141, avg=14.75, stdev=17.17 00:13:35.168 lat (msec): min=3, max=141, avg=14.80, stdev=17.17 00:13:35.168 clat percentiles (msec): 00:13:35.168 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:13:35.168 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 12], 60.00th=[ 12], 00:13:35.168 | 70.00th=[ 14], 80.00th=[ 16], 90.00th=[ 21], 95.00th=[ 26], 00:13:35.168 | 99.00th=[ 104], 99.50th=[ 136], 99.90th=[ 142], 99.95th=[ 142], 00:13:35.168 | 99.99th=[ 142] 00:13:35.168 write: IOPS=111, BW=13.9MiB/s (14.6MB/s)(119MiB/8576msec); 0 zone resets 00:13:35.168 slat (usec): min=28, max=6800, avg=156.81, stdev=363.16 00:13:35.168 clat (msec): min=3, max=216, 
avg=71.23, stdev=30.09 00:13:35.168 lat (msec): min=3, max=216, avg=71.38, stdev=30.12 00:13:35.168 clat percentiles (msec): 00:13:35.168 | 1.00th=[ 7], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 53], 00:13:35.168 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 65], 60.00th=[ 70], 00:13:35.168 | 70.00th=[ 79], 80.00th=[ 90], 90.00th=[ 104], 95.00th=[ 134], 00:13:35.168 | 99.00th=[ 180], 99.50th=[ 199], 99.90th=[ 218], 99.95th=[ 218], 00:13:35.168 | 99.99th=[ 218] 00:13:35.168 bw ( KiB/s): min= 1024, max=25600, per=0.84%, avg=12129.05, stdev=5862.90, samples=20 00:13:35.168 iops : min= 8, max= 200, avg=94.60, stdev=45.85, samples=20 00:13:35.168 lat (msec) : 4=0.23%, 10=20.63%, 20=21.99%, 50=10.03%, 100=39.83% 00:13:35.168 lat (msec) : 250=7.29% 00:13:35.168 cpu : usr=0.69%, sys=0.26%, ctx=2889, majf=0, minf=1 00:13:35.168 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.168 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.168 issued rwts: total=800,955,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.168 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.168 job47: (groupid=0, jobs=1): err= 0: pid=85343: Tue Jul 23 22:15:07 2024 00:13:35.168 read: IOPS=92, BW=11.6MiB/s (12.2MB/s)(100MiB/8616msec) 00:13:35.168 slat (usec): min=5, max=1621, avg=61.82, stdev=122.26 00:13:35.168 clat (msec): min=3, max=157, avg=17.41, stdev=20.03 00:13:35.168 lat (msec): min=3, max=157, avg=17.47, stdev=20.03 00:13:35.168 clat percentiles (msec): 00:13:35.168 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:13:35.168 | 30.00th=[ 9], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 13], 00:13:35.168 | 70.00th=[ 17], 80.00th=[ 20], 90.00th=[ 29], 95.00th=[ 55], 00:13:35.168 | 99.00th=[ 117], 99.50th=[ 153], 99.90th=[ 159], 99.95th=[ 159], 00:13:35.168 | 99.99th=[ 159] 00:13:35.168 write: IOPS=98, BW=12.3MiB/s (12.9MB/s)(102MiB/8306msec); 
0 zone resets 00:13:35.168 slat (usec): min=31, max=63860, avg=209.51, stdev=2233.19 00:13:35.168 clat (msec): min=41, max=297, avg=80.11, stdev=31.68 00:13:35.168 lat (msec): min=41, max=297, avg=80.32, stdev=31.68 00:13:35.168 clat percentiles (msec): 00:13:35.168 | 1.00th=[ 47], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 54], 00:13:35.168 | 30.00th=[ 59], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 84], 00:13:35.169 | 70.00th=[ 92], 80.00th=[ 104], 90.00th=[ 121], 95.00th=[ 132], 00:13:35.169 | 99.00th=[ 205], 99.50th=[ 230], 99.90th=[ 296], 99.95th=[ 296], 00:13:35.169 | 99.99th=[ 296] 00:13:35.169 bw ( KiB/s): min= 1792, max=17408, per=0.72%, avg=10380.60, stdev=4387.03, samples=20 00:13:35.169 iops : min= 14, max= 136, avg=81.00, stdev=34.29, samples=20 00:13:35.169 lat (msec) : 4=0.06%, 10=17.91%, 20=21.80%, 50=13.59%, 100=34.59% 00:13:35.169 lat (msec) : 250=11.92%, 500=0.12% 00:13:35.169 cpu : usr=0.58%, sys=0.27%, ctx=2828, majf=0, minf=5 00:13:35.169 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.169 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.169 issued rwts: total=800,819,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.169 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.169 job48: (groupid=0, jobs=1): err= 0: pid=85344: Tue Jul 23 22:15:07 2024 00:13:35.169 read: IOPS=97, BW=12.2MiB/s (12.7MB/s)(100MiB/8229msec) 00:13:35.169 slat (usec): min=5, max=1937, avg=58.97, stdev=142.70 00:13:35.169 clat (usec): min=2899, max=70494, avg=10195.48, stdev=7660.14 00:13:35.169 lat (usec): min=3269, max=70555, avg=10254.45, stdev=7665.37 00:13:35.169 clat percentiles (usec): 00:13:35.169 | 1.00th=[ 3621], 5.00th=[ 4359], 10.00th=[ 5014], 20.00th=[ 6587], 00:13:35.169 | 30.00th=[ 7111], 40.00th=[ 7504], 50.00th=[ 7832], 60.00th=[ 8455], 00:13:35.169 | 70.00th=[10028], 80.00th=[11469], 90.00th=[16581], 
95.00th=[24249], 00:13:35.169 | 99.00th=[38011], 99.50th=[60556], 99.90th=[70779], 99.95th=[70779], 00:13:35.169 | 99.99th=[70779] 00:13:35.169 write: IOPS=96, BW=12.0MiB/s (12.6MB/s)(108MiB/9000msec); 0 zone resets 00:13:35.169 slat (usec): min=27, max=3190, avg=124.10, stdev=169.84 00:13:35.169 clat (msec): min=44, max=217, avg=82.70, stdev=28.70 00:13:35.169 lat (msec): min=44, max=217, avg=82.82, stdev=28.71 00:13:35.169 clat percentiles (msec): 00:13:35.169 | 1.00th=[ 48], 5.00th=[ 51], 10.00th=[ 53], 20.00th=[ 58], 00:13:35.169 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 78], 60.00th=[ 86], 00:13:35.169 | 70.00th=[ 94], 80.00th=[ 104], 90.00th=[ 122], 95.00th=[ 140], 00:13:35.169 | 99.00th=[ 180], 99.50th=[ 192], 99.90th=[ 218], 99.95th=[ 218], 00:13:35.169 | 99.99th=[ 218] 00:13:35.169 bw ( KiB/s): min= 6656, max=17186, per=0.75%, avg=10846.32, stdev=3112.93, samples=19 00:13:35.169 iops : min= 52, max= 134, avg=84.63, stdev=24.27, samples=19 00:13:35.169 lat (msec) : 4=1.20%, 10=32.25%, 20=11.23%, 50=5.59%, 100=37.72% 00:13:35.169 lat (msec) : 250=12.01% 00:13:35.169 cpu : usr=0.58%, sys=0.30%, ctx=2743, majf=0, minf=5 00:13:35.169 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.169 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.169 issued rwts: total=800,865,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.169 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.169 job49: (groupid=0, jobs=1): err= 0: pid=85346: Tue Jul 23 22:15:07 2024 00:13:35.169 read: IOPS=87, BW=10.9MiB/s (11.4MB/s)(100MiB/9183msec) 00:13:35.169 slat (usec): min=5, max=1407, avg=42.38, stdev=101.55 00:13:35.169 clat (usec): min=2974, max=82282, avg=12252.83, stdev=8249.83 00:13:35.169 lat (usec): min=3053, max=82401, avg=12295.21, stdev=8249.86 00:13:35.169 clat percentiles (usec): 00:13:35.169 | 1.00th=[ 4293], 5.00th=[ 6718], 
10.00th=[ 7111], 20.00th=[ 8094], 00:13:35.169 | 30.00th=[ 8848], 40.00th=[ 9634], 50.00th=[10159], 60.00th=[11076], 00:13:35.169 | 70.00th=[12387], 80.00th=[13960], 90.00th=[19530], 95.00th=[23200], 00:13:35.169 | 99.00th=[44303], 99.50th=[74974], 99.90th=[82314], 99.95th=[82314], 00:13:35.169 | 99.99th=[82314] 00:13:35.169 write: IOPS=107, BW=13.4MiB/s (14.1MB/s)(119MiB/8835msec); 0 zone resets 00:13:35.169 slat (usec): min=28, max=9846, avg=137.62, stdev=366.30 00:13:35.169 clat (msec): min=2, max=296, avg=73.53, stdev=31.94 00:13:35.169 lat (msec): min=2, max=296, avg=73.67, stdev=31.94 00:13:35.169 clat percentiles (msec): 00:13:35.169 | 1.00th=[ 6], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 53], 00:13:35.169 | 30.00th=[ 58], 40.00th=[ 63], 50.00th=[ 68], 60.00th=[ 74], 00:13:35.169 | 70.00th=[ 82], 80.00th=[ 90], 90.00th=[ 105], 95.00th=[ 133], 00:13:35.169 | 99.00th=[ 194], 99.50th=[ 239], 99.90th=[ 296], 99.95th=[ 296], 00:13:35.169 | 99.99th=[ 296] 00:13:35.169 bw ( KiB/s): min= 3072, max=26880, per=0.84%, avg=12043.10, stdev=5527.67, samples=20 00:13:35.169 iops : min= 24, max= 210, avg=94.00, stdev=43.13, samples=20 00:13:35.169 lat (msec) : 4=0.51%, 10=21.74%, 20=21.05%, 50=10.13%, 100=40.22% 00:13:35.169 lat (msec) : 250=6.24%, 500=0.11% 00:13:35.169 cpu : usr=0.62%, sys=0.30%, ctx=2853, majf=0, minf=1 00:13:35.169 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.169 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.169 issued rwts: total=800,948,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.169 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.169 job50: (groupid=0, jobs=1): err= 0: pid=85351: Tue Jul 23 22:15:07 2024 00:13:35.169 read: IOPS=129, BW=16.1MiB/s (16.9MB/s)(140MiB/8678msec) 00:13:35.169 slat (usec): min=4, max=1210, avg=40.26, stdev=86.92 00:13:35.169 clat (usec): min=2030, 
max=50440, avg=7509.48, stdev=6830.51 00:13:35.169 lat (usec): min=2540, max=50449, avg=7549.74, stdev=6840.75 00:13:35.169 clat percentiles (usec): 00:13:35.169 | 1.00th=[ 2933], 5.00th=[ 3163], 10.00th=[ 3392], 20.00th=[ 4146], 00:13:35.169 | 30.00th=[ 4686], 40.00th=[ 5145], 50.00th=[ 5473], 60.00th=[ 5932], 00:13:35.169 | 70.00th=[ 6980], 80.00th=[ 8291], 90.00th=[11994], 95.00th=[18220], 00:13:35.169 | 99.00th=[41681], 99.50th=[46400], 99.90th=[50594], 99.95th=[50594], 00:13:35.169 | 99.99th=[50594] 00:13:35.169 write: IOPS=134, BW=16.8MiB/s (17.6MB/s)(150MiB/8935msec); 0 zone resets 00:13:35.169 slat (usec): min=34, max=4000, avg=132.77, stdev=212.49 00:13:35.169 clat (msec): min=8, max=170, avg=58.97, stdev=20.42 00:13:35.169 lat (msec): min=8, max=170, avg=59.11, stdev=20.43 00:13:35.169 clat percentiles (msec): 00:13:35.169 | 1.00th=[ 20], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 41], 00:13:35.169 | 30.00th=[ 47], 40.00th=[ 52], 50.00th=[ 57], 60.00th=[ 63], 00:13:35.169 | 70.00th=[ 68], 80.00th=[ 72], 90.00th=[ 83], 95.00th=[ 96], 00:13:35.169 | 99.00th=[ 130], 99.50th=[ 136], 99.90th=[ 155], 99.95th=[ 171], 00:13:35.169 | 99.99th=[ 171] 00:13:35.169 bw ( KiB/s): min= 9472, max=22784, per=1.05%, avg=15085.95, stdev=3899.27, samples=19 00:13:35.169 iops : min= 74, max= 178, avg=117.68, stdev=30.55, samples=19 00:13:35.169 lat (msec) : 4=8.78%, 10=32.59%, 20=5.17%, 50=20.58%, 100=30.74% 00:13:35.169 lat (msec) : 250=2.15% 00:13:35.169 cpu : usr=0.80%, sys=0.42%, ctx=3776, majf=0, minf=3 00:13:35.169 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.169 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.169 issued rwts: total=1120,1203,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.169 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.169 job51: (groupid=0, jobs=1): err= 0: pid=85357: Tue Jul 23 
22:15:07 2024 00:13:35.169 read: IOPS=122, BW=15.3MiB/s (16.1MB/s)(140MiB/9140msec) 00:13:35.169 slat (usec): min=5, max=1353, avg=46.40, stdev=100.33 00:13:35.169 clat (usec): min=2741, max=70535, avg=10896.99, stdev=9384.38 00:13:35.169 lat (usec): min=2863, max=70544, avg=10943.39, stdev=9384.95 00:13:35.169 clat percentiles (usec): 00:13:35.169 | 1.00th=[ 3228], 5.00th=[ 3851], 10.00th=[ 4424], 20.00th=[ 5211], 00:13:35.169 | 30.00th=[ 6128], 40.00th=[ 7242], 50.00th=[ 8586], 60.00th=[ 9372], 00:13:35.169 | 70.00th=[10421], 80.00th=[13829], 90.00th=[19268], 95.00th=[26870], 00:13:35.169 | 99.00th=[64226], 99.50th=[65799], 99.90th=[69731], 99.95th=[70779], 00:13:35.169 | 99.99th=[70779] 00:13:35.169 write: IOPS=149, BW=18.7MiB/s (19.6MB/s)(159MiB/8486msec); 0 zone resets 00:13:35.169 slat (usec): min=36, max=2399, avg=126.69, stdev=212.06 00:13:35.169 clat (msec): min=13, max=162, avg=53.00, stdev=20.93 00:13:35.169 lat (msec): min=13, max=162, avg=53.12, stdev=20.93 00:13:35.169 clat percentiles (msec): 00:13:35.169 | 1.00th=[ 31], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 37], 00:13:35.169 | 30.00th=[ 41], 40.00th=[ 44], 50.00th=[ 48], 60.00th=[ 53], 00:13:35.169 | 70.00th=[ 57], 80.00th=[ 64], 90.00th=[ 78], 95.00th=[ 97], 00:13:35.169 | 99.00th=[ 129], 99.50th=[ 153], 99.90th=[ 163], 99.95th=[ 163], 00:13:35.169 | 99.99th=[ 163] 00:13:35.169 bw ( KiB/s): min= 5632, max=26880, per=1.12%, avg=16134.85, stdev=6344.07, samples=20 00:13:35.169 iops : min= 44, max= 210, avg=125.90, stdev=49.56, samples=20 00:13:35.169 lat (msec) : 4=3.10%, 10=28.10%, 20=12.06%, 50=31.70%, 100=22.49% 00:13:35.169 lat (msec) : 250=2.55% 00:13:35.169 cpu : usr=0.91%, sys=0.35%, ctx=3823, majf=0, minf=3 00:13:35.169 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.169 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.169 issued rwts: 
total=1120,1268,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.169 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.169 job52: (groupid=0, jobs=1): err= 0: pid=85358: Tue Jul 23 22:15:07 2024 00:13:35.169 read: IOPS=124, BW=15.5MiB/s (16.3MB/s)(140MiB/9005msec) 00:13:35.169 slat (usec): min=5, max=1508, avg=39.44, stdev=85.95 00:13:35.169 clat (usec): min=2541, max=67961, avg=9324.64, stdev=8949.16 00:13:35.169 lat (usec): min=2764, max=67970, avg=9364.08, stdev=8944.94 00:13:35.169 clat percentiles (usec): 00:13:35.169 | 1.00th=[ 2966], 5.00th=[ 3425], 10.00th=[ 4293], 20.00th=[ 4948], 00:13:35.169 | 30.00th=[ 5342], 40.00th=[ 6063], 50.00th=[ 6783], 60.00th=[ 7701], 00:13:35.169 | 70.00th=[ 9110], 80.00th=[10814], 90.00th=[15401], 95.00th=[21890], 00:13:35.169 | 99.00th=[58983], 99.50th=[64750], 99.90th=[66847], 99.95th=[67634], 00:13:35.169 | 99.99th=[67634] 00:13:35.169 write: IOPS=139, BW=17.4MiB/s (18.2MB/s)(151MiB/8688msec); 0 zone resets 00:13:35.169 slat (usec): min=34, max=58536, avg=174.15, stdev=1695.91 00:13:35.169 clat (msec): min=5, max=163, avg=57.00, stdev=22.20 00:13:35.169 lat (msec): min=5, max=163, avg=57.17, stdev=22.21 00:13:35.169 clat percentiles (msec): 00:13:35.169 | 1.00th=[ 33], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 39], 00:13:35.170 | 30.00th=[ 43], 40.00th=[ 47], 50.00th=[ 52], 60.00th=[ 57], 00:13:35.170 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 102], 00:13:35.170 | 99.00th=[ 133], 99.50th=[ 146], 99.90th=[ 161], 99.95th=[ 163], 00:13:35.170 | 99.99th=[ 163] 00:13:35.170 bw ( KiB/s): min= 7936, max=23808, per=1.07%, avg=15355.50, stdev=4860.31, samples=20 00:13:35.170 iops : min= 62, max= 186, avg=119.85, stdev=37.99, samples=20 00:13:35.170 lat (msec) : 4=3.95%, 10=32.99%, 20=8.25%, 50=26.76%, 100=25.30% 00:13:35.170 lat (msec) : 250=2.75% 00:13:35.170 cpu : usr=0.79%, sys=0.39%, ctx=3892, majf=0, minf=5 00:13:35.170 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.170 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.170 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.170 issued rwts: total=1120,1208,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.170 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.170 job53: (groupid=0, jobs=1): err= 0: pid=85360: Tue Jul 23 22:15:07 2024 00:13:35.170 read: IOPS=124, BW=15.6MiB/s (16.3MB/s)(140MiB/8996msec) 00:13:35.170 slat (usec): min=5, max=838, avg=42.82, stdev=79.68 00:13:35.170 clat (msec): min=2, max=126, avg= 9.03, stdev=12.31 00:13:35.170 lat (msec): min=2, max=126, avg= 9.07, stdev=12.31 00:13:35.170 clat percentiles (msec): 00:13:35.170 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 4], 20.00th=[ 5], 00:13:35.170 | 30.00th=[ 5], 40.00th=[ 6], 50.00th=[ 7], 60.00th=[ 8], 00:13:35.170 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 13], 95.00th=[ 19], 00:13:35.170 | 99.00th=[ 89], 99.50th=[ 97], 99.90th=[ 110], 99.95th=[ 127], 00:13:35.170 | 99.99th=[ 127] 00:13:35.170 write: IOPS=137, BW=17.2MiB/s (18.1MB/s)(151MiB/8742msec); 0 zone resets 00:13:35.170 slat (usec): min=33, max=33491, avg=159.53, stdev=987.96 00:13:35.170 clat (msec): min=22, max=194, avg=57.40, stdev=21.54 00:13:35.170 lat (msec): min=23, max=194, avg=57.56, stdev=21.52 00:13:35.170 clat percentiles (msec): 00:13:35.170 | 1.00th=[ 32], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 40], 00:13:35.170 | 30.00th=[ 44], 40.00th=[ 49], 50.00th=[ 53], 60.00th=[ 59], 00:13:35.170 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 101], 00:13:35.170 | 99.00th=[ 134], 99.50th=[ 138], 99.90th=[ 161], 99.95th=[ 194], 00:13:35.170 | 99.99th=[ 194] 00:13:35.170 bw ( KiB/s): min= 5632, max=24832, per=1.07%, avg=15341.85, stdev=5111.38, samples=20 00:13:35.170 iops : min= 44, max= 194, avg=119.75, stdev=39.89, samples=20 00:13:35.170 lat (msec) : 4=7.31%, 10=32.67%, 20=6.41%, 50=24.03%, 100=26.83% 00:13:35.170 lat (msec) : 250=2.75% 00:13:35.170 cpu : usr=0.85%, sys=0.39%, 
ctx=3792, majf=0, minf=3 00:13:35.170 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.170 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.170 issued rwts: total=1120,1206,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.170 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.170 job54: (groupid=0, jobs=1): err= 0: pid=85361: Tue Jul 23 22:15:07 2024 00:13:35.170 read: IOPS=139, BW=17.4MiB/s (18.3MB/s)(160MiB/9172msec) 00:13:35.170 slat (usec): min=5, max=735, avg=42.71, stdev=81.49 00:13:35.170 clat (usec): min=3256, max=72379, avg=10234.18, stdev=6898.06 00:13:35.170 lat (usec): min=3280, max=72468, avg=10276.89, stdev=6900.49 00:13:35.170 clat percentiles (usec): 00:13:35.170 | 1.00th=[ 3720], 5.00th=[ 4424], 10.00th=[ 4752], 20.00th=[ 5669], 00:13:35.170 | 30.00th=[ 6521], 40.00th=[ 7570], 50.00th=[ 8717], 60.00th=[ 9765], 00:13:35.170 | 70.00th=[11338], 80.00th=[12649], 90.00th=[17171], 95.00th=[21103], 00:13:35.170 | 99.00th=[32113], 99.50th=[63701], 99.90th=[70779], 99.95th=[72877], 00:13:35.170 | 99.99th=[72877] 00:13:35.170 write: IOPS=155, BW=19.4MiB/s (20.4MB/s)(162MiB/8359msec); 0 zone resets 00:13:35.170 slat (usec): min=31, max=4742, avg=117.33, stdev=197.97 00:13:35.170 clat (msec): min=8, max=177, avg=50.97, stdev=20.33 00:13:35.170 lat (msec): min=8, max=177, avg=51.09, stdev=20.33 00:13:35.170 clat percentiles (msec): 00:13:35.170 | 1.00th=[ 22], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 36], 00:13:35.170 | 30.00th=[ 40], 40.00th=[ 43], 50.00th=[ 46], 60.00th=[ 51], 00:13:35.170 | 70.00th=[ 56], 80.00th=[ 62], 90.00th=[ 72], 95.00th=[ 87], 00:13:35.170 | 99.00th=[ 138], 99.50th=[ 150], 99.90th=[ 174], 99.95th=[ 178], 00:13:35.170 | 99.99th=[ 178] 00:13:35.170 bw ( KiB/s): min= 5888, max=27392, per=1.17%, avg=16806.95, stdev=6655.95, samples=19 00:13:35.170 iops : min= 46, max= 214, 
avg=131.21, stdev=51.97, samples=19 00:13:35.170 lat (msec) : 4=1.05%, 10=29.66%, 20=16.09%, 50=32.73%, 100=18.92% 00:13:35.170 lat (msec) : 250=1.55% 00:13:35.170 cpu : usr=0.88%, sys=0.43%, ctx=4138, majf=0, minf=3 00:13:35.170 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.170 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.170 issued rwts: total=1280,1299,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.170 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.170 job55: (groupid=0, jobs=1): err= 0: pid=85362: Tue Jul 23 22:15:07 2024 00:13:35.170 read: IOPS=125, BW=15.7MiB/s (16.5MB/s)(140MiB/8896msec) 00:13:35.170 slat (usec): min=5, max=1499, avg=45.92, stdev=110.69 00:13:35.170 clat (usec): min=1898, max=75985, avg=8500.67, stdev=8069.09 00:13:35.170 lat (usec): min=2158, max=76001, avg=8546.59, stdev=8064.30 00:13:35.170 clat percentiles (usec): 00:13:35.170 | 1.00th=[ 3163], 5.00th=[ 3621], 10.00th=[ 3949], 20.00th=[ 4490], 00:13:35.170 | 30.00th=[ 5211], 40.00th=[ 5932], 50.00th=[ 6390], 60.00th=[ 7373], 00:13:35.170 | 70.00th=[ 8455], 80.00th=[ 9503], 90.00th=[12518], 95.00th=[18744], 00:13:35.170 | 99.00th=[51643], 99.50th=[64226], 99.90th=[76022], 99.95th=[76022], 00:13:35.170 | 99.99th=[76022] 00:13:35.170 write: IOPS=141, BW=17.7MiB/s (18.6MB/s)(156MiB/8803msec); 0 zone resets 00:13:35.170 slat (usec): min=31, max=3356, avg=125.10, stdev=191.02 00:13:35.170 clat (msec): min=14, max=166, avg=56.04, stdev=20.35 00:13:35.170 lat (msec): min=15, max=166, avg=56.17, stdev=20.36 00:13:35.170 clat percentiles (msec): 00:13:35.170 | 1.00th=[ 28], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 40], 00:13:35.170 | 30.00th=[ 43], 40.00th=[ 47], 50.00th=[ 52], 60.00th=[ 58], 00:13:35.170 | 70.00th=[ 63], 80.00th=[ 71], 90.00th=[ 84], 95.00th=[ 97], 00:13:35.170 | 99.00th=[ 125], 99.50th=[ 134], 99.90th=[ 157], 
99.95th=[ 167], 00:13:35.170 | 99.99th=[ 167] 00:13:35.170 bw ( KiB/s): min= 8448, max=26112, per=1.09%, avg=15707.37, stdev=5110.89, samples=19 00:13:35.170 iops : min= 66, max= 204, avg=122.58, stdev=39.94, samples=19 00:13:35.170 lat (msec) : 2=0.04%, 4=5.37%, 10=33.71%, 20=6.38%, 50=26.40% 00:13:35.170 lat (msec) : 100=25.94%, 250=2.15% 00:13:35.170 cpu : usr=0.75%, sys=0.46%, ctx=3945, majf=0, minf=3 00:13:35.170 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.170 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.170 issued rwts: total=1120,1247,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.170 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.170 job56: (groupid=0, jobs=1): err= 0: pid=85363: Tue Jul 23 22:15:07 2024 00:13:35.170 read: IOPS=126, BW=15.8MiB/s (16.6MB/s)(148MiB/9326msec) 00:13:35.170 slat (usec): min=5, max=1663, avg=46.14, stdev=114.24 00:13:35.170 clat (usec): min=1939, max=84543, avg=10689.86, stdev=10028.70 00:13:35.170 lat (usec): min=1958, max=84558, avg=10735.99, stdev=10035.72 00:13:35.170 clat percentiles (usec): 00:13:35.170 | 1.00th=[ 3621], 5.00th=[ 4293], 10.00th=[ 4948], 20.00th=[ 5669], 00:13:35.170 | 30.00th=[ 6259], 40.00th=[ 7046], 50.00th=[ 8029], 60.00th=[ 9241], 00:13:35.170 | 70.00th=[10552], 80.00th=[12649], 90.00th=[16450], 95.00th=[23462], 00:13:35.170 | 99.00th=[67634], 99.50th=[74974], 99.90th=[82314], 99.95th=[84411], 00:13:35.170 | 99.99th=[84411] 00:13:35.170 write: IOPS=152, BW=19.0MiB/s (20.0MB/s)(160MiB/8409msec); 0 zone resets 00:13:35.170 slat (usec): min=34, max=10103, avg=142.28, stdev=371.85 00:13:35.170 clat (msec): min=4, max=241, avg=52.04, stdev=23.57 00:13:35.170 lat (msec): min=4, max=241, avg=52.18, stdev=23.58 00:13:35.170 clat percentiles (msec): 00:13:35.170 | 1.00th=[ 10], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 37], 00:13:35.170 | 
30.00th=[ 41], 40.00th=[ 44], 50.00th=[ 47], 60.00th=[ 51], 00:13:35.170 | 70.00th=[ 56], 80.00th=[ 63], 90.00th=[ 75], 95.00th=[ 91], 00:13:35.170 | 99.00th=[ 155], 99.50th=[ 174], 99.90th=[ 241], 99.95th=[ 243], 00:13:35.170 | 99.99th=[ 243] 00:13:35.170 bw ( KiB/s): min= 4864, max=32512, per=1.14%, avg=16380.35, stdev=7075.95, samples=20 00:13:35.170 iops : min= 38, max= 254, avg=127.85, stdev=55.29, samples=20 00:13:35.170 lat (msec) : 2=0.04%, 4=1.02%, 10=31.08%, 20=14.38%, 50=30.48% 00:13:35.170 lat (msec) : 100=21.05%, 250=1.95% 00:13:35.170 cpu : usr=0.90%, sys=0.42%, ctx=3994, majf=0, minf=3 00:13:35.170 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.170 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.170 issued rwts: total=1181,1280,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.170 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.170 job57: (groupid=0, jobs=1): err= 0: pid=85364: Tue Jul 23 22:15:07 2024 00:13:35.170 read: IOPS=140, BW=17.6MiB/s (18.5MB/s)(160MiB/9086msec) 00:13:35.170 slat (usec): min=5, max=1774, avg=43.95, stdev=102.71 00:13:35.170 clat (usec): min=2463, max=68182, avg=9165.36, stdev=7082.19 00:13:35.170 lat (usec): min=2734, max=68198, avg=9209.30, stdev=7085.72 00:13:35.170 clat percentiles (usec): 00:13:35.170 | 1.00th=[ 2966], 5.00th=[ 3654], 10.00th=[ 4424], 20.00th=[ 5211], 00:13:35.170 | 30.00th=[ 5932], 40.00th=[ 6849], 50.00th=[ 7898], 60.00th=[ 8586], 00:13:35.170 | 70.00th=[ 9372], 80.00th=[10683], 90.00th=[14353], 95.00th=[19006], 00:13:35.170 | 99.00th=[50594], 99.50th=[59507], 99.90th=[67634], 99.95th=[68682], 00:13:35.170 | 99.99th=[68682] 00:13:35.170 write: IOPS=152, BW=19.0MiB/s (19.9MB/s)(162MiB/8526msec); 0 zone resets 00:13:35.170 slat (usec): min=35, max=2914, avg=120.51, stdev=192.79 00:13:35.170 clat (msec): min=19, max=171, avg=52.07, 
stdev=21.08 00:13:35.170 lat (msec): min=19, max=171, avg=52.19, stdev=21.08 00:13:35.170 clat percentiles (msec): 00:13:35.170 | 1.00th=[ 32], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 37], 00:13:35.171 | 30.00th=[ 40], 40.00th=[ 43], 50.00th=[ 46], 60.00th=[ 51], 00:13:35.171 | 70.00th=[ 56], 80.00th=[ 65], 90.00th=[ 78], 95.00th=[ 91], 00:13:35.171 | 99.00th=[ 140], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 171], 00:13:35.171 | 99.99th=[ 171] 00:13:35.171 bw ( KiB/s): min= 6144, max=26112, per=1.17%, avg=16843.63, stdev=6237.84, samples=19 00:13:35.171 iops : min= 48, max= 204, avg=131.53, stdev=48.72, samples=19 00:13:35.171 lat (msec) : 4=3.42%, 10=34.51%, 20=9.67%, 50=31.29%, 100=19.14% 00:13:35.171 lat (msec) : 250=1.98% 00:13:35.171 cpu : usr=0.74%, sys=0.57%, ctx=4155, majf=0, minf=1 00:13:35.171 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.171 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.171 issued rwts: total=1280,1296,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.171 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.171 job58: (groupid=0, jobs=1): err= 0: pid=85365: Tue Jul 23 22:15:07 2024 00:13:35.171 read: IOPS=128, BW=16.1MiB/s (16.8MB/s)(140MiB/8716msec) 00:13:35.171 slat (usec): min=5, max=1453, avg=42.35, stdev=94.17 00:13:35.171 clat (usec): min=2283, max=88221, avg=7670.46, stdev=9608.42 00:13:35.171 lat (usec): min=2298, max=88319, avg=7712.80, stdev=9611.49 00:13:35.171 clat percentiles (usec): 00:13:35.171 | 1.00th=[ 2769], 5.00th=[ 3097], 10.00th=[ 3261], 20.00th=[ 3720], 00:13:35.171 | 30.00th=[ 4047], 40.00th=[ 4424], 50.00th=[ 5145], 60.00th=[ 6063], 00:13:35.171 | 70.00th=[ 7111], 80.00th=[ 8848], 90.00th=[12125], 95.00th=[18220], 00:13:35.171 | 99.00th=[70779], 99.50th=[83362], 99.90th=[87557], 99.95th=[88605], 00:13:35.171 | 99.99th=[88605] 00:13:35.171 write: 
IOPS=136, BW=17.1MiB/s (18.0MB/s)(153MiB/8908msec); 0 zone resets 00:13:35.171 slat (usec): min=35, max=6233, avg=126.97, stdev=261.03 00:13:35.171 clat (msec): min=8, max=151, avg=58.02, stdev=19.25 00:13:35.171 lat (msec): min=8, max=151, avg=58.15, stdev=19.25 00:13:35.171 clat percentiles (msec): 00:13:35.171 | 1.00th=[ 26], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 41], 00:13:35.171 | 30.00th=[ 45], 40.00th=[ 51], 50.00th=[ 56], 60.00th=[ 62], 00:13:35.171 | 70.00th=[ 67], 80.00th=[ 73], 90.00th=[ 83], 95.00th=[ 94], 00:13:35.171 | 99.00th=[ 113], 99.50th=[ 117], 99.90th=[ 140], 99.95th=[ 153], 00:13:35.171 | 99.99th=[ 153] 00:13:35.171 bw ( KiB/s): min= 7936, max=21504, per=1.06%, avg=15251.26, stdev=4045.51, samples=19 00:13:35.171 iops : min= 62, max= 168, avg=119.05, stdev=31.57, samples=19 00:13:35.171 lat (msec) : 4=13.50%, 10=27.69%, 20=5.13%, 50=20.90%, 100=30.98% 00:13:35.171 lat (msec) : 250=1.79% 00:13:35.171 cpu : usr=0.92%, sys=0.29%, ctx=3824, majf=0, minf=1 00:13:35.171 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.171 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.171 issued rwts: total=1120,1220,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.171 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.171 job59: (groupid=0, jobs=1): err= 0: pid=85366: Tue Jul 23 22:15:07 2024 00:13:35.171 read: IOPS=136, BW=17.1MiB/s (17.9MB/s)(160MiB/9352msec) 00:13:35.171 slat (usec): min=5, max=4300, avg=48.21, stdev=162.01 00:13:35.171 clat (msec): min=2, max=137, avg=10.42, stdev=12.53 00:13:35.171 lat (msec): min=2, max=137, avg=10.46, stdev=12.53 00:13:35.171 clat percentiles (msec): 00:13:35.171 | 1.00th=[ 4], 5.00th=[ 4], 10.00th=[ 5], 20.00th=[ 6], 00:13:35.171 | 30.00th=[ 6], 40.00th=[ 7], 50.00th=[ 8], 60.00th=[ 9], 00:13:35.171 | 70.00th=[ 11], 80.00th=[ 12], 90.00th=[ 17], 95.00th=[ 25], 
00:13:35.171 | 99.00th=[ 61], 99.50th=[ 132], 99.90th=[ 138], 99.95th=[ 138], 00:13:35.171 | 99.99th=[ 138] 00:13:35.171 write: IOPS=155, BW=19.5MiB/s (20.4MB/s)(162MiB/8348msec); 0 zone resets 00:13:35.171 slat (usec): min=34, max=4288, avg=142.21, stdev=281.64 00:13:35.171 clat (msec): min=9, max=173, avg=50.86, stdev=19.99 00:13:35.171 lat (msec): min=9, max=173, avg=51.00, stdev=19.99 00:13:35.171 clat percentiles (msec): 00:13:35.171 | 1.00th=[ 11], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 37], 00:13:35.171 | 30.00th=[ 39], 40.00th=[ 43], 50.00th=[ 46], 60.00th=[ 51], 00:13:35.171 | 70.00th=[ 56], 80.00th=[ 64], 90.00th=[ 75], 95.00th=[ 90], 00:13:35.171 | 99.00th=[ 130], 99.50th=[ 138], 99.90th=[ 163], 99.95th=[ 174], 00:13:35.171 | 99.99th=[ 174] 00:13:35.171 bw ( KiB/s): min= 4608, max=31488, per=1.15%, avg=16532.65, stdev=7024.14, samples=20 00:13:35.171 iops : min= 36, max= 246, avg=129.05, stdev=54.90, samples=20 00:13:35.171 lat (msec) : 4=3.02%, 10=32.03%, 20=12.45%, 50=31.60%, 100=19.39% 00:13:35.171 lat (msec) : 250=1.51% 00:13:35.171 cpu : usr=0.92%, sys=0.37%, ctx=4191, majf=0, minf=5 00:13:35.171 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.171 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.171 issued rwts: total=1280,1299,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.171 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.171 job60: (groupid=0, jobs=1): err= 0: pid=85367: Tue Jul 23 22:15:07 2024 00:13:35.171 read: IOPS=131, BW=16.4MiB/s (17.2MB/s)(149MiB/9071msec) 00:13:35.171 slat (usec): min=5, max=1702, avg=44.15, stdev=104.16 00:13:35.171 clat (usec): min=2409, max=74799, avg=9220.34, stdev=6434.33 00:13:35.171 lat (usec): min=2427, max=74814, avg=9264.48, stdev=6430.96 00:13:35.171 clat percentiles (usec): 00:13:35.171 | 1.00th=[ 3130], 5.00th=[ 3785], 10.00th=[ 4621], 
20.00th=[ 5735], 00:13:35.171 | 30.00th=[ 6259], 40.00th=[ 6915], 50.00th=[ 7570], 60.00th=[ 8291], 00:13:35.171 | 70.00th=[ 9896], 80.00th=[11076], 90.00th=[14353], 95.00th=[20055], 00:13:35.171 | 99.00th=[42730], 99.50th=[51119], 99.90th=[57934], 99.95th=[74974], 00:13:35.171 | 99.99th=[74974] 00:13:35.171 write: IOPS=148, BW=18.6MiB/s (19.5MB/s)(160MiB/8609msec); 0 zone resets 00:13:35.171 slat (usec): min=33, max=2815, avg=126.95, stdev=201.21 00:13:35.171 clat (msec): min=26, max=181, avg=53.25, stdev=21.22 00:13:35.171 lat (msec): min=26, max=181, avg=53.38, stdev=21.23 00:13:35.171 clat percentiles (msec): 00:13:35.171 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 37], 00:13:35.171 | 30.00th=[ 41], 40.00th=[ 44], 50.00th=[ 48], 60.00th=[ 52], 00:13:35.171 | 70.00th=[ 58], 80.00th=[ 65], 90.00th=[ 81], 95.00th=[ 95], 00:13:35.171 | 99.00th=[ 140], 99.50th=[ 146], 99.90th=[ 180], 99.95th=[ 182], 00:13:35.171 | 99.99th=[ 182] 00:13:35.171 bw ( KiB/s): min= 5888, max=26624, per=1.14%, avg=16462.95, stdev=6352.95, samples=19 00:13:35.171 iops : min= 46, max= 208, avg=128.47, stdev=49.73, samples=19 00:13:35.171 lat (msec) : 4=3.00%, 10=30.85%, 20=11.90%, 50=31.05%, 100=21.13% 00:13:35.171 lat (msec) : 250=2.06% 00:13:35.171 cpu : usr=0.92%, sys=0.37%, ctx=3926, majf=0, minf=3 00:13:35.171 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.171 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.171 issued rwts: total=1190,1280,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.171 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.171 job61: (groupid=0, jobs=1): err= 0: pid=85368: Tue Jul 23 22:15:07 2024 00:13:35.171 read: IOPS=124, BW=15.5MiB/s (16.3MB/s)(140MiB/9013msec) 00:13:35.171 slat (usec): min=5, max=1185, avg=45.28, stdev=98.88 00:13:35.171 clat (usec): min=2091, max=83337, avg=9441.93, 
stdev=8372.82 00:13:35.171 lat (usec): min=2138, max=83345, avg=9487.21, stdev=8371.36 00:13:35.171 clat percentiles (usec): 00:13:35.171 | 1.00th=[ 2933], 5.00th=[ 3621], 10.00th=[ 4228], 20.00th=[ 5145], 00:13:35.171 | 30.00th=[ 5604], 40.00th=[ 6259], 50.00th=[ 7046], 60.00th=[ 7832], 00:13:35.171 | 70.00th=[ 9372], 80.00th=[10945], 90.00th=[16909], 95.00th=[23987], 00:13:35.171 | 99.00th=[41157], 99.50th=[72877], 99.90th=[80217], 99.95th=[83362], 00:13:35.171 | 99.99th=[83362] 00:13:35.171 write: IOPS=138, BW=17.3MiB/s (18.1MB/s)(150MiB/8676msec); 0 zone resets 00:13:35.171 slat (usec): min=33, max=5176, avg=132.67, stdev=258.46 00:13:35.171 clat (msec): min=19, max=193, avg=57.40, stdev=20.20 00:13:35.171 lat (msec): min=19, max=193, avg=57.53, stdev=20.20 00:13:35.171 clat percentiles (msec): 00:13:35.171 | 1.00th=[ 32], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 40], 00:13:35.171 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 54], 60.00th=[ 60], 00:13:35.171 | 70.00th=[ 65], 80.00th=[ 73], 90.00th=[ 84], 95.00th=[ 93], 00:13:35.171 | 99.00th=[ 125], 99.50th=[ 138], 99.90th=[ 161], 99.95th=[ 194], 00:13:35.171 | 99.99th=[ 194] 00:13:35.171 bw ( KiB/s): min= 8704, max=26164, per=1.05%, avg=15159.37, stdev=5022.10, samples=19 00:13:35.171 iops : min= 68, max= 204, avg=118.32, stdev=39.17, samples=19 00:13:35.171 lat (msec) : 4=4.01%, 10=32.31%, 20=8.71%, 50=25.93%, 100=27.14% 00:13:35.171 lat (msec) : 250=1.90% 00:13:35.171 cpu : usr=0.81%, sys=0.38%, ctx=3780, majf=0, minf=1 00:13:35.171 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.171 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.171 issued rwts: total=1120,1198,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.171 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.171 job62: (groupid=0, jobs=1): err= 0: pid=85369: Tue Jul 23 22:15:07 2024 00:13:35.171 
read: IOPS=120, BW=15.1MiB/s (15.8MB/s)(140MiB/9298msec) 00:13:35.171 slat (usec): min=5, max=3551, avg=56.89, stdev=201.09 00:13:35.171 clat (msec): min=2, max=137, avg=11.11, stdev=12.72 00:13:35.171 lat (msec): min=2, max=137, avg=11.16, stdev=12.71 00:13:35.171 clat percentiles (msec): 00:13:35.171 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 6], 00:13:35.171 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 9], 60.00th=[ 10], 00:13:35.171 | 70.00th=[ 11], 80.00th=[ 13], 90.00th=[ 19], 95.00th=[ 24], 00:13:35.171 | 99.00th=[ 63], 99.50th=[ 128], 99.90th=[ 138], 99.95th=[ 138], 00:13:35.171 | 99.99th=[ 138] 00:13:35.171 write: IOPS=146, BW=18.4MiB/s (19.3MB/s)(155MiB/8463msec); 0 zone resets 00:13:35.171 slat (usec): min=32, max=5987, avg=138.95, stdev=284.67 00:13:35.171 clat (msec): min=8, max=222, avg=53.97, stdev=24.37 00:13:35.171 lat (msec): min=8, max=222, avg=54.11, stdev=24.37 00:13:35.171 clat percentiles (msec): 00:13:35.171 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 38], 00:13:35.171 | 30.00th=[ 41], 40.00th=[ 44], 50.00th=[ 48], 60.00th=[ 52], 00:13:35.172 | 70.00th=[ 58], 80.00th=[ 66], 90.00th=[ 82], 95.00th=[ 97], 00:13:35.172 | 99.00th=[ 157], 99.50th=[ 180], 99.90th=[ 218], 99.95th=[ 224], 00:13:35.172 | 99.99th=[ 224] 00:13:35.172 bw ( KiB/s): min= 4864, max=29755, per=1.10%, avg=15821.10, stdev=6890.35, samples=20 00:13:35.172 iops : min= 38, max= 232, avg=123.50, stdev=53.77, samples=20 00:13:35.172 lat (msec) : 4=0.59%, 10=30.72%, 20=12.31%, 50=33.39%, 100=20.19% 00:13:35.172 lat (msec) : 250=2.79% 00:13:35.172 cpu : usr=0.79%, sys=0.43%, ctx=3814, majf=0, minf=5 00:13:35.172 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.172 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.172 issued rwts: total=1120,1243,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.172 latency : target=0, window=0, 
percentile=100.00%, depth=8 00:13:35.172 job63: (groupid=0, jobs=1): err= 0: pid=85370: Tue Jul 23 22:15:07 2024 00:13:35.172 read: IOPS=130, BW=16.3MiB/s (17.0MB/s)(140MiB/8611msec) 00:13:35.172 slat (usec): min=5, max=501, avg=39.17, stdev=67.80 00:13:35.172 clat (usec): min=2145, max=49390, avg=6727.97, stdev=5449.43 00:13:35.172 lat (usec): min=2297, max=49399, avg=6767.14, stdev=5447.80 00:13:35.172 clat percentiles (usec): 00:13:35.172 | 1.00th=[ 2638], 5.00th=[ 2999], 10.00th=[ 3228], 20.00th=[ 3720], 00:13:35.172 | 30.00th=[ 4113], 40.00th=[ 4621], 50.00th=[ 5211], 60.00th=[ 5669], 00:13:35.172 | 70.00th=[ 6652], 80.00th=[ 8094], 90.00th=[11076], 95.00th=[15008], 00:13:35.172 | 99.00th=[37487], 99.50th=[39584], 99.90th=[45876], 99.95th=[49546], 00:13:35.172 | 99.99th=[49546] 00:13:35.172 write: IOPS=130, BW=16.3MiB/s (17.0MB/s)(147MiB/9051msec); 0 zone resets 00:13:35.172 slat (usec): min=31, max=5394, avg=129.53, stdev=233.49 00:13:35.172 clat (msec): min=21, max=178, avg=61.06, stdev=21.71 00:13:35.172 lat (msec): min=21, max=178, avg=61.19, stdev=21.71 00:13:35.172 clat percentiles (msec): 00:13:35.172 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 38], 20.00th=[ 43], 00:13:35.172 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 57], 60.00th=[ 63], 00:13:35.172 | 70.00th=[ 70], 80.00th=[ 78], 90.00th=[ 92], 95.00th=[ 104], 00:13:35.172 | 99.00th=[ 128], 99.50th=[ 142], 99.90th=[ 171], 99.95th=[ 180], 00:13:35.172 | 99.99th=[ 180] 00:13:35.172 bw ( KiB/s): min= 8924, max=20224, per=1.05%, avg=15080.47, stdev=3347.51, samples=19 00:13:35.172 iops : min= 69, max= 158, avg=117.58, stdev=26.10, samples=19 00:13:35.172 lat (msec) : 4=12.97%, 10=29.87%, 20=4.44%, 50=21.55%, 100=28.25% 00:13:35.172 lat (msec) : 250=2.92% 00:13:35.172 cpu : usr=0.76%, sys=0.41%, ctx=3788, majf=0, minf=8 00:13:35.172 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.172 complete 
: 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.172 issued rwts: total=1120,1177,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.172 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.172 job64: (groupid=0, jobs=1): err= 0: pid=85371: Tue Jul 23 22:15:07 2024 00:13:35.172 read: IOPS=125, BW=15.7MiB/s (16.5MB/s)(140MiB/8916msec) 00:13:35.172 slat (usec): min=4, max=1676, avg=45.99, stdev=112.70 00:13:35.172 clat (usec): min=1657, max=256281, avg=10462.69, stdev=21657.94 00:13:35.172 lat (msec): min=2, max=256, avg=10.51, stdev=21.67 00:13:35.172 clat percentiles (msec): 00:13:35.172 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 4], 20.00th=[ 5], 00:13:35.172 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 7], 60.00th=[ 8], 00:13:35.172 | 70.00th=[ 10], 80.00th=[ 12], 90.00th=[ 17], 95.00th=[ 23], 00:13:35.172 | 99.00th=[ 62], 99.50th=[ 251], 99.90th=[ 257], 99.95th=[ 257], 00:13:35.172 | 99.99th=[ 257] 00:13:35.172 write: IOPS=138, BW=17.3MiB/s (18.2MB/s)(148MiB/8545msec); 0 zone resets 00:13:35.172 slat (usec): min=30, max=2836, avg=125.41, stdev=196.37 00:13:35.172 clat (msec): min=30, max=177, avg=57.11, stdev=22.26 00:13:35.172 lat (msec): min=30, max=178, avg=57.23, stdev=22.27 00:13:35.172 clat percentiles (msec): 00:13:35.172 | 1.00th=[ 32], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 39], 00:13:35.172 | 30.00th=[ 43], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 59], 00:13:35.172 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 83], 95.00th=[ 99], 00:13:35.172 | 99.00th=[ 140], 99.50th=[ 165], 99.90th=[ 176], 99.95th=[ 178], 00:13:35.172 | 99.99th=[ 178] 00:13:35.172 bw ( KiB/s): min= 1792, max=26308, per=1.05%, avg=15072.00, stdev=6051.47, samples=19 00:13:35.172 iops : min= 14, max= 205, avg=117.68, stdev=47.20, samples=19 00:13:35.172 lat (msec) : 2=0.04%, 4=7.33%, 10=28.66%, 20=9.02%, 50=26.93% 00:13:35.172 lat (msec) : 100=25.46%, 250=2.30%, 500=0.26% 00:13:35.172 cpu : usr=0.76%, sys=0.41%, ctx=3768, majf=0, minf=1 00:13:35.172 IO depths : 1=0.7%, 
2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.172 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.172 issued rwts: total=1120,1186,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.172 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.172 job65: (groupid=0, jobs=1): err= 0: pid=85372: Tue Jul 23 22:15:07 2024 00:13:35.172 read: IOPS=122, BW=15.4MiB/s (16.1MB/s)(140MiB/9110msec) 00:13:35.172 slat (usec): min=5, max=1688, avg=49.68, stdev=124.76 00:13:35.172 clat (msec): min=2, max=177, avg=12.03, stdev=15.83 00:13:35.172 lat (msec): min=2, max=177, avg=12.08, stdev=15.83 00:13:35.172 clat percentiles (msec): 00:13:35.172 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 5], 20.00th=[ 6], 00:13:35.172 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 10], 00:13:35.172 | 70.00th=[ 11], 80.00th=[ 14], 90.00th=[ 19], 95.00th=[ 31], 00:13:35.172 | 99.00th=[ 103], 99.50th=[ 112], 99.90th=[ 178], 99.95th=[ 178], 00:13:35.172 | 99.99th=[ 178] 00:13:35.172 write: IOPS=152, BW=19.0MiB/s (19.9MB/s)(156MiB/8227msec); 0 zone resets 00:13:35.172 slat (usec): min=27, max=4393, avg=121.33, stdev=208.19 00:13:35.172 clat (msec): min=10, max=250, avg=52.13, stdev=21.46 00:13:35.172 lat (msec): min=10, max=250, avg=52.25, stdev=21.46 00:13:35.172 clat percentiles (msec): 00:13:35.172 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 37], 00:13:35.172 | 30.00th=[ 41], 40.00th=[ 43], 50.00th=[ 47], 60.00th=[ 52], 00:13:35.172 | 70.00th=[ 57], 80.00th=[ 65], 90.00th=[ 74], 95.00th=[ 89], 00:13:35.172 | 99.00th=[ 132], 99.50th=[ 176], 99.90th=[ 228], 99.95th=[ 251], 00:13:35.172 | 99.99th=[ 251] 00:13:35.172 bw ( KiB/s): min= 2816, max=27959, per=1.11%, avg=15920.55, stdev=6367.05, samples=20 00:13:35.172 iops : min= 22, max= 218, avg=124.10, stdev=49.74, samples=20 00:13:35.172 lat (msec) : 4=2.40%, 10=28.30%, 20=12.70%, 50=33.07%, 100=21.34% 
00:13:35.172 lat (msec) : 250=2.15%, 500=0.04% 00:13:35.172 cpu : usr=0.87%, sys=0.35%, ctx=3844, majf=0, minf=5 00:13:35.172 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.172 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.172 issued rwts: total=1120,1251,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.172 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.172 job66: (groupid=0, jobs=1): err= 0: pid=85373: Tue Jul 23 22:15:07 2024 00:13:35.172 read: IOPS=127, BW=15.9MiB/s (16.7MB/s)(140MiB/8808msec) 00:13:35.172 slat (usec): min=5, max=1228, avg=47.68, stdev=101.93 00:13:35.172 clat (msec): min=2, max=146, avg= 8.88, stdev=12.99 00:13:35.172 lat (msec): min=2, max=146, avg= 8.93, stdev=12.99 00:13:35.172 clat percentiles (msec): 00:13:35.172 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 4], 20.00th=[ 4], 00:13:35.172 | 30.00th=[ 5], 40.00th=[ 6], 50.00th=[ 7], 60.00th=[ 7], 00:13:35.172 | 70.00th=[ 8], 80.00th=[ 10], 90.00th=[ 15], 95.00th=[ 21], 00:13:35.172 | 99.00th=[ 57], 99.50th=[ 130], 99.90th=[ 144], 99.95th=[ 146], 00:13:35.172 | 99.99th=[ 146] 00:13:35.172 write: IOPS=135, BW=17.0MiB/s (17.8MB/s)(149MiB/8762msec); 0 zone resets 00:13:35.172 slat (usec): min=27, max=3745, avg=130.40, stdev=189.40 00:13:35.172 clat (msec): min=24, max=205, avg=58.36, stdev=21.35 00:13:35.172 lat (msec): min=25, max=205, avg=58.49, stdev=21.36 00:13:35.172 clat percentiles (msec): 00:13:35.172 | 1.00th=[ 32], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 41], 00:13:35.172 | 30.00th=[ 45], 40.00th=[ 50], 50.00th=[ 54], 60.00th=[ 61], 00:13:35.172 | 70.00th=[ 66], 80.00th=[ 73], 90.00th=[ 84], 95.00th=[ 97], 00:13:35.172 | 99.00th=[ 130], 99.50th=[ 161], 99.90th=[ 180], 99.95th=[ 207], 00:13:35.172 | 99.99th=[ 207] 00:13:35.172 bw ( KiB/s): min= 5120, max=24064, per=1.05%, avg=15086.89, stdev=5348.50, samples=19 00:13:35.172 
iops : min= 40, max= 188, avg=117.79, stdev=41.76, samples=19 00:13:35.172 lat (msec) : 4=12.46%, 10=27.00%, 20=6.23%, 50=24.49%, 100=27.48% 00:13:35.172 lat (msec) : 250=2.34% 00:13:35.172 cpu : usr=0.91%, sys=0.33%, ctx=3761, majf=0, minf=3 00:13:35.172 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.172 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.172 issued rwts: total=1120,1191,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.172 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.172 job67: (groupid=0, jobs=1): err= 0: pid=85374: Tue Jul 23 22:15:07 2024 00:13:35.172 read: IOPS=138, BW=17.3MiB/s (18.1MB/s)(160MiB/9249msec) 00:13:35.172 slat (usec): min=5, max=2395, avg=51.56, stdev=122.59 00:13:35.172 clat (usec): min=1920, max=94972, avg=8857.53, stdev=8583.72 00:13:35.172 lat (usec): min=2124, max=94986, avg=8909.09, stdev=8584.11 00:13:35.172 clat percentiles (usec): 00:13:35.172 | 1.00th=[ 2900], 5.00th=[ 3621], 10.00th=[ 4015], 20.00th=[ 4490], 00:13:35.172 | 30.00th=[ 5211], 40.00th=[ 5735], 50.00th=[ 6652], 60.00th=[ 8225], 00:13:35.172 | 70.00th=[ 9372], 80.00th=[10552], 90.00th=[14091], 95.00th=[17695], 00:13:35.172 | 99.00th=[57934], 99.50th=[66847], 99.90th=[90702], 99.95th=[94897], 00:13:35.172 | 99.99th=[94897] 00:13:35.172 write: IOPS=149, BW=18.7MiB/s (19.6MB/s)(160MiB/8559msec); 0 zone resets 00:13:35.172 slat (usec): min=27, max=9125, avg=129.22, stdev=310.89 00:13:35.172 clat (msec): min=13, max=174, avg=53.02, stdev=23.16 00:13:35.172 lat (msec): min=13, max=174, avg=53.15, stdev=23.16 00:13:35.172 clat percentiles (msec): 00:13:35.172 | 1.00th=[ 30], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 36], 00:13:35.172 | 30.00th=[ 39], 40.00th=[ 42], 50.00th=[ 45], 60.00th=[ 49], 00:13:35.172 | 70.00th=[ 59], 80.00th=[ 69], 90.00th=[ 86], 95.00th=[ 99], 00:13:35.173 | 99.00th=[ 142], 
99.50th=[ 153], 99.90th=[ 169], 99.95th=[ 176], 00:13:35.173 | 99.99th=[ 176] 00:13:35.173 bw ( KiB/s): min= 3072, max=29696, per=1.15%, avg=16636.00, stdev=7192.58, samples=19 00:13:35.173 iops : min= 24, max= 232, avg=129.84, stdev=56.19, samples=19 00:13:35.173 lat (msec) : 2=0.04%, 4=4.77%, 10=33.48%, 20=10.16%, 50=31.88% 00:13:35.173 lat (msec) : 100=17.42%, 250=2.27% 00:13:35.173 cpu : usr=0.86%, sys=0.41%, ctx=4138, majf=0, minf=1 00:13:35.173 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.173 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.173 issued rwts: total=1280,1280,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.173 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.173 job68: (groupid=0, jobs=1): err= 0: pid=85375: Tue Jul 23 22:15:07 2024 00:13:35.173 read: IOPS=129, BW=16.1MiB/s (16.9MB/s)(140MiB/8671msec) 00:13:35.173 slat (usec): min=5, max=1386, avg=44.48, stdev=102.58 00:13:35.173 clat (msec): min=2, max=406, avg=11.56, stdev=33.01 00:13:35.173 lat (msec): min=2, max=406, avg=11.60, stdev=33.02 00:13:35.173 clat percentiles (msec): 00:13:35.173 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 4], 20.00th=[ 5], 00:13:35.173 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 7], 60.00th=[ 8], 00:13:35.173 | 70.00th=[ 9], 80.00th=[ 11], 90.00th=[ 15], 95.00th=[ 28], 00:13:35.173 | 99.00th=[ 59], 99.50th=[ 401], 99.90th=[ 405], 99.95th=[ 405], 00:13:35.173 | 99.99th=[ 405] 00:13:35.173 write: IOPS=135, BW=16.9MiB/s (17.7MB/s)(142MiB/8377msec); 0 zone resets 00:13:35.173 slat (usec): min=36, max=4585, avg=132.64, stdev=250.03 00:13:35.173 clat (msec): min=18, max=217, avg=58.52, stdev=23.47 00:13:35.173 lat (msec): min=18, max=217, avg=58.65, stdev=23.46 00:13:35.173 clat percentiles (msec): 00:13:35.173 | 1.00th=[ 32], 5.00th=[ 35], 10.00th=[ 38], 20.00th=[ 41], 00:13:35.173 | 30.00th=[ 44], 40.00th=[ 47], 
50.00th=[ 51], 60.00th=[ 57], 00:13:35.173 | 70.00th=[ 65], 80.00th=[ 77], 90.00th=[ 88], 95.00th=[ 103], 00:13:35.173 | 99.00th=[ 138], 99.50th=[ 146], 99.90th=[ 209], 99.95th=[ 218], 00:13:35.173 | 99.99th=[ 218] 00:13:35.173 bw ( KiB/s): min= 768, max=25037, per=1.04%, avg=14948.11, stdev=6329.10, samples=19 00:13:35.173 iops : min= 6, max= 195, avg=116.53, stdev=49.52, samples=19 00:13:35.173 lat (msec) : 4=7.01%, 10=31.94%, 20=7.72%, 50=26.62%, 100=23.65% 00:13:35.173 lat (msec) : 250=2.71%, 500=0.35% 00:13:35.173 cpu : usr=0.82%, sys=0.32%, ctx=3714, majf=0, minf=3 00:13:35.173 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.173 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.173 issued rwts: total=1120,1134,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.173 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.173 job69: (groupid=0, jobs=1): err= 0: pid=85376: Tue Jul 23 22:15:07 2024 00:13:35.173 read: IOPS=138, BW=17.3MiB/s (18.1MB/s)(160MiB/9250msec) 00:13:35.173 slat (usec): min=5, max=1173, avg=41.09, stdev=77.63 00:13:35.173 clat (usec): min=2144, max=37060, avg=7870.68, stdev=3821.48 00:13:35.173 lat (usec): min=2164, max=37073, avg=7911.77, stdev=3822.58 00:13:35.173 clat percentiles (usec): 00:13:35.173 | 1.00th=[ 3458], 5.00th=[ 3818], 10.00th=[ 4228], 20.00th=[ 4817], 00:13:35.173 | 30.00th=[ 5473], 40.00th=[ 6128], 50.00th=[ 7177], 60.00th=[ 8029], 00:13:35.173 | 70.00th=[ 8979], 80.00th=[ 9896], 90.00th=[12256], 95.00th=[14877], 00:13:35.173 | 99.00th=[21103], 99.50th=[26870], 99.90th=[33424], 99.95th=[36963], 00:13:35.173 | 99.99th=[36963] 00:13:35.173 write: IOPS=149, BW=18.7MiB/s (19.6MB/s)(164MiB/8768msec); 0 zone resets 00:13:35.173 slat (usec): min=28, max=3874, avg=128.12, stdev=201.72 00:13:35.173 clat (msec): min=10, max=196, avg=53.11, stdev=23.19 00:13:35.173 lat (msec): 
min=10, max=196, avg=53.24, stdev=23.19 00:13:35.173 clat percentiles (msec): 00:13:35.173 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 38], 00:13:35.173 | 30.00th=[ 41], 40.00th=[ 44], 50.00th=[ 47], 60.00th=[ 50], 00:13:35.173 | 70.00th=[ 56], 80.00th=[ 64], 90.00th=[ 82], 95.00th=[ 101], 00:13:35.173 | 99.00th=[ 142], 99.50th=[ 169], 99.90th=[ 190], 99.95th=[ 197], 00:13:35.173 | 99.99th=[ 197] 00:13:35.173 bw ( KiB/s): min= 6656, max=27904, per=1.16%, avg=16670.95, stdev=6264.39, samples=20 00:13:35.173 iops : min= 52, max= 218, avg=130.15, stdev=49.00, samples=20 00:13:35.173 lat (msec) : 4=3.94%, 10=35.79%, 20=9.23%, 50=31.51%, 100=17.03% 00:13:35.173 lat (msec) : 250=2.51% 00:13:35.173 cpu : usr=0.94%, sys=0.40%, ctx=4264, majf=0, minf=1 00:13:35.173 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.173 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.173 issued rwts: total=1280,1310,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.173 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.173 job70: (groupid=0, jobs=1): err= 0: pid=85377: Tue Jul 23 22:15:07 2024 00:13:35.173 read: IOPS=95, BW=11.9MiB/s (12.5MB/s)(100MiB/8419msec) 00:13:35.173 slat (usec): min=5, max=1091, avg=45.80, stdev=91.77 00:13:35.173 clat (usec): min=3422, max=45585, avg=11062.70, stdev=6690.85 00:13:35.173 lat (usec): min=3940, max=45599, avg=11108.50, stdev=6688.16 00:13:35.173 clat percentiles (usec): 00:13:35.173 | 1.00th=[ 4178], 5.00th=[ 4621], 10.00th=[ 5211], 20.00th=[ 5997], 00:13:35.173 | 30.00th=[ 6783], 40.00th=[ 7767], 50.00th=[ 8979], 60.00th=[10945], 00:13:35.173 | 70.00th=[12649], 80.00th=[14091], 90.00th=[20317], 95.00th=[24249], 00:13:35.173 | 99.00th=[35914], 99.50th=[39584], 99.90th=[45351], 99.95th=[45351], 00:13:35.173 | 99.99th=[45351] 00:13:35.173 write: IOPS=95, BW=11.9MiB/s 
(12.5MB/s)(106MiB/8912msec); 0 zone resets 00:13:35.173 slat (usec): min=34, max=5527, avg=129.02, stdev=259.51 00:13:35.173 clat (msec): min=32, max=361, avg=83.14, stdev=37.01 00:13:35.173 lat (msec): min=32, max=361, avg=83.27, stdev=37.01 00:13:35.173 clat percentiles (msec): 00:13:35.173 | 1.00th=[ 38], 5.00th=[ 50], 10.00th=[ 52], 20.00th=[ 56], 00:13:35.173 | 30.00th=[ 60], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 80], 00:13:35.173 | 70.00th=[ 93], 80.00th=[ 108], 90.00th=[ 129], 95.00th=[ 146], 00:13:35.173 | 99.00th=[ 218], 99.50th=[ 253], 99.90th=[ 363], 99.95th=[ 363], 00:13:35.173 | 99.99th=[ 363] 00:13:35.173 bw ( KiB/s): min= 2560, max=18468, per=0.75%, avg=10774.95, stdev=4731.86, samples=19 00:13:35.173 iops : min= 20, max= 144, avg=83.89, stdev=37.12, samples=19 00:13:35.173 lat (msec) : 4=0.06%, 10=26.56%, 20=16.74%, 50=8.25%, 100=35.96% 00:13:35.173 lat (msec) : 250=12.13%, 500=0.30% 00:13:35.173 cpu : usr=0.56%, sys=0.28%, ctx=2720, majf=0, minf=3 00:13:35.173 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.173 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.173 issued rwts: total=800,849,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.173 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.173 job71: (groupid=0, jobs=1): err= 0: pid=85378: Tue Jul 23 22:15:07 2024 00:13:35.173 read: IOPS=80, BW=10.1MiB/s (10.6MB/s)(80.0MiB/7933msec) 00:13:35.173 slat (usec): min=5, max=856, avg=52.79, stdev=95.50 00:13:35.173 clat (msec): min=2, max=225, avg=20.84, stdev=29.45 00:13:35.173 lat (msec): min=3, max=225, avg=20.90, stdev=29.44 00:13:35.173 clat percentiles (msec): 00:13:35.173 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:13:35.173 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 15], 00:13:35.173 | 70.00th=[ 18], 80.00th=[ 22], 90.00th=[ 36], 95.00th=[ 55], 
00:13:35.173 | 99.00th=[ 218], 99.50th=[ 222], 99.90th=[ 226], 99.95th=[ 226], 00:13:35.173 | 99.99th=[ 226] 00:13:35.173 write: IOPS=90, BW=11.3MiB/s (11.9MB/s)(94.2MiB/8338msec); 0 zone resets 00:13:35.173 slat (usec): min=28, max=3340, avg=133.51, stdev=203.98 00:13:35.173 clat (msec): min=32, max=460, avg=87.91, stdev=42.64 00:13:35.173 lat (msec): min=32, max=460, avg=88.05, stdev=42.66 00:13:35.173 clat percentiles (msec): 00:13:35.173 | 1.00th=[ 47], 5.00th=[ 50], 10.00th=[ 53], 20.00th=[ 57], 00:13:35.173 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 77], 60.00th=[ 85], 00:13:35.173 | 70.00th=[ 93], 80.00th=[ 109], 90.00th=[ 144], 95.00th=[ 171], 00:13:35.173 | 99.00th=[ 213], 99.50th=[ 305], 99.90th=[ 460], 99.95th=[ 460], 00:13:35.173 | 99.99th=[ 460] 00:13:35.173 bw ( KiB/s): min= 764, max=16031, per=0.65%, avg=9389.68, stdev=4862.50, samples=19 00:13:35.173 iops : min= 5, max= 125, avg=72.84, stdev=38.10, samples=19 00:13:35.173 lat (msec) : 4=0.29%, 10=15.28%, 20=19.08%, 50=11.55%, 100=38.88% 00:13:35.173 lat (msec) : 250=14.56%, 500=0.36% 00:13:35.174 cpu : usr=0.55%, sys=0.18%, ctx=2461, majf=0, minf=5 00:13:35.174 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.174 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.174 issued rwts: total=640,754,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.174 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.174 job72: (groupid=0, jobs=1): err= 0: pid=85379: Tue Jul 23 22:15:07 2024 00:13:35.174 read: IOPS=84, BW=10.6MiB/s (11.1MB/s)(88.2MiB/8357msec) 00:13:35.174 slat (usec): min=5, max=1563, avg=48.44, stdev=114.51 00:13:35.174 clat (msec): min=3, max=169, avg=13.81, stdev=16.45 00:13:35.174 lat (msec): min=3, max=169, avg=13.86, stdev=16.44 00:13:35.174 clat percentiles (msec): 00:13:35.174 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8], 
00:13:35.174 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:13:35.174 | 70.00th=[ 13], 80.00th=[ 16], 90.00th=[ 21], 95.00th=[ 26], 00:13:35.174 | 99.00th=[ 129], 99.50th=[ 165], 99.90th=[ 169], 99.95th=[ 169], 00:13:35.174 | 99.99th=[ 169] 00:13:35.174 write: IOPS=91, BW=11.4MiB/s (11.9MB/s)(100MiB/8779msec); 0 zone resets 00:13:35.174 slat (usec): min=37, max=65345, avg=235.36, stdev=2318.80 00:13:35.174 clat (msec): min=8, max=397, avg=86.88, stdev=39.70 00:13:35.174 lat (msec): min=8, max=397, avg=87.12, stdev=39.65 00:13:35.174 clat percentiles (msec): 00:13:35.174 | 1.00th=[ 28], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 56], 00:13:35.174 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 78], 60.00th=[ 88], 00:13:35.174 | 70.00th=[ 99], 80.00th=[ 111], 90.00th=[ 134], 95.00th=[ 155], 00:13:35.174 | 99.00th=[ 222], 99.50th=[ 249], 99.90th=[ 397], 99.95th=[ 397], 00:13:35.174 | 99.99th=[ 397] 00:13:35.174 bw ( KiB/s): min= 1792, max=18139, per=0.71%, avg=10236.95, stdev=4394.93, samples=20 00:13:35.174 iops : min= 14, max= 141, avg=79.80, stdev=34.40, samples=20 00:13:35.174 lat (msec) : 4=0.20%, 10=17.40%, 20=24.50%, 50=9.16%, 100=33.53% 00:13:35.174 lat (msec) : 250=14.94%, 500=0.27% 00:13:35.174 cpu : usr=0.58%, sys=0.27%, ctx=2473, majf=0, minf=9 00:13:35.174 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.174 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.174 issued rwts: total=706,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.174 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.174 job73: (groupid=0, jobs=1): err= 0: pid=85380: Tue Jul 23 22:15:07 2024 00:13:35.174 read: IOPS=95, BW=11.9MiB/s (12.5MB/s)(100MiB/8398msec) 00:13:35.174 slat (usec): min=4, max=845, avg=43.79, stdev=85.03 00:13:35.174 clat (usec): min=4611, max=52527, avg=12021.38, stdev=6948.50 00:13:35.174 lat 
(usec): min=4663, max=52648, avg=12065.18, stdev=6948.32 00:13:35.174 clat percentiles (usec): 00:13:35.174 | 1.00th=[ 4883], 5.00th=[ 5538], 10.00th=[ 6194], 20.00th=[ 6915], 00:13:35.174 | 30.00th=[ 7439], 40.00th=[ 9372], 50.00th=[10814], 60.00th=[11600], 00:13:35.174 | 70.00th=[12780], 80.00th=[14484], 90.00th=[19530], 95.00th=[26870], 00:13:35.174 | 99.00th=[41157], 99.50th=[44303], 99.90th=[52691], 99.95th=[52691], 00:13:35.174 | 99.99th=[52691] 00:13:35.174 write: IOPS=99, BW=12.4MiB/s (13.0MB/s)(109MiB/8798msec); 0 zone resets 00:13:35.174 slat (usec): min=34, max=3260, avg=155.07, stdev=253.04 00:13:35.174 clat (msec): min=45, max=333, avg=79.68, stdev=35.69 00:13:35.174 lat (msec): min=45, max=333, avg=79.83, stdev=35.69 00:13:35.174 clat percentiles (msec): 00:13:35.174 | 1.00th=[ 48], 5.00th=[ 50], 10.00th=[ 52], 20.00th=[ 55], 00:13:35.174 | 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 67], 60.00th=[ 74], 00:13:35.174 | 70.00th=[ 83], 80.00th=[ 103], 90.00th=[ 126], 95.00th=[ 148], 00:13:35.174 | 99.00th=[ 218], 99.50th=[ 232], 99.90th=[ 334], 99.95th=[ 334], 00:13:35.174 | 99.99th=[ 334] 00:13:35.174 bw ( KiB/s): min= 4352, max=17664, per=0.76%, avg=10918.74, stdev=4707.72, samples=19 00:13:35.174 iops : min= 34, max= 138, avg=85.11, stdev=36.85, samples=19 00:13:35.174 lat (msec) : 10=19.94%, 20=23.28%, 50=7.64%, 100=38.03%, 250=10.93% 00:13:35.174 lat (msec) : 500=0.18% 00:13:35.174 cpu : usr=0.57%, sys=0.31%, ctx=2840, majf=0, minf=1 00:13:35.174 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.174 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.174 issued rwts: total=800,875,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.174 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.174 job74: (groupid=0, jobs=1): err= 0: pid=85381: Tue Jul 23 22:15:07 2024 00:13:35.174 read: IOPS=92, BW=11.5MiB/s 
(12.1MB/s)(100MiB/8667msec) 00:13:35.174 slat (usec): min=5, max=2691, avg=54.40, stdev=146.26 00:13:35.174 clat (msec): min=3, max=150, avg=15.56, stdev=15.68 00:13:35.174 lat (msec): min=3, max=150, avg=15.61, stdev=15.68 00:13:35.174 clat percentiles (msec): 00:13:35.174 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 8], 00:13:35.174 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 14], 00:13:35.174 | 70.00th=[ 17], 80.00th=[ 20], 90.00th=[ 23], 95.00th=[ 32], 00:13:35.174 | 99.00th=[ 65], 99.50th=[ 146], 99.90th=[ 150], 99.95th=[ 150], 00:13:35.174 | 99.99th=[ 150] 00:13:35.174 write: IOPS=95, BW=11.9MiB/s (12.5MB/s)(101MiB/8474msec); 0 zone resets 00:13:35.174 slat (usec): min=33, max=4623, avg=144.83, stdev=235.50 00:13:35.174 clat (msec): min=3, max=286, avg=83.19, stdev=34.41 00:13:35.174 lat (msec): min=4, max=286, avg=83.33, stdev=34.44 00:13:35.174 clat percentiles (msec): 00:13:35.174 | 1.00th=[ 12], 5.00th=[ 50], 10.00th=[ 53], 20.00th=[ 57], 00:13:35.174 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 77], 60.00th=[ 86], 00:13:35.174 | 70.00th=[ 95], 80.00th=[ 109], 90.00th=[ 121], 95.00th=[ 133], 00:13:35.174 | 99.00th=[ 236], 99.50th=[ 251], 99.90th=[ 288], 99.95th=[ 288], 00:13:35.174 | 99.99th=[ 288] 00:13:35.174 bw ( KiB/s): min= 3328, max=20264, per=0.72%, avg=10345.16, stdev=4446.66, samples=19 00:13:35.174 iops : min= 26, max= 158, avg=80.63, stdev=34.69, samples=19 00:13:35.174 lat (msec) : 4=0.12%, 10=16.23%, 20=26.43%, 50=8.77%, 100=35.26% 00:13:35.174 lat (msec) : 250=12.87%, 500=0.31% 00:13:35.174 cpu : usr=0.63%, sys=0.25%, ctx=2669, majf=0, minf=7 00:13:35.174 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.174 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.174 issued rwts: total=800,808,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.174 latency : target=0, window=0, 
percentile=100.00%, depth=8 00:13:35.174 job75: (groupid=0, jobs=1): err= 0: pid=85382: Tue Jul 23 22:15:07 2024 00:13:35.174 read: IOPS=91, BW=11.4MiB/s (12.0MB/s)(100MiB/8763msec) 00:13:35.174 slat (usec): min=5, max=1030, avg=43.91, stdev=86.02 00:13:35.174 clat (usec): min=3573, max=50564, avg=11866.44, stdev=7623.67 00:13:35.174 lat (usec): min=3586, max=50576, avg=11910.34, stdev=7620.92 00:13:35.174 clat percentiles (usec): 00:13:35.174 | 1.00th=[ 3752], 5.00th=[ 3982], 10.00th=[ 4228], 20.00th=[ 5669], 00:13:35.174 | 30.00th=[ 6783], 40.00th=[ 7504], 50.00th=[ 9110], 60.00th=[11469], 00:13:35.174 | 70.00th=[14484], 80.00th=[18744], 90.00th=[21103], 95.00th=[25560], 00:13:35.174 | 99.00th=[34866], 99.50th=[43779], 99.90th=[50594], 99.95th=[50594], 00:13:35.174 | 99.99th=[50594] 00:13:35.174 write: IOPS=94, BW=11.8MiB/s (12.4MB/s)(105MiB/8846msec); 0 zone resets 00:13:35.174 slat (usec): min=36, max=12884, avg=141.80, stdev=467.63 00:13:35.174 clat (msec): min=19, max=238, avg=83.54, stdev=34.88 00:13:35.174 lat (msec): min=20, max=239, avg=83.69, stdev=34.86 00:13:35.174 clat percentiles (msec): 00:13:35.174 | 1.00th=[ 45], 5.00th=[ 49], 10.00th=[ 51], 20.00th=[ 54], 00:13:35.174 | 30.00th=[ 60], 40.00th=[ 67], 50.00th=[ 74], 60.00th=[ 85], 00:13:35.174 | 70.00th=[ 97], 80.00th=[ 110], 90.00th=[ 130], 95.00th=[ 148], 00:13:35.174 | 99.00th=[ 205], 99.50th=[ 218], 99.90th=[ 239], 99.95th=[ 239], 00:13:35.174 | 99.99th=[ 239] 00:13:35.174 bw ( KiB/s): min= 3840, max=18944, per=0.75%, avg=10778.95, stdev=4596.84, samples=19 00:13:35.174 iops : min= 30, max= 148, avg=84.21, stdev=35.91, samples=19 00:13:35.174 lat (msec) : 4=2.50%, 10=23.44%, 20=16.67%, 50=11.17%, 100=32.42% 00:13:35.174 lat (msec) : 250=13.80% 00:13:35.174 cpu : usr=0.58%, sys=0.29%, ctx=2682, majf=0, minf=7 00:13:35.174 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.174 
complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.174 issued rwts: total=800,838,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.174 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.174 job76: (groupid=0, jobs=1): err= 0: pid=85383: Tue Jul 23 22:15:07 2024 00:13:35.174 read: IOPS=92, BW=11.6MiB/s (12.2MB/s)(100MiB/8608msec) 00:13:35.174 slat (usec): min=5, max=1424, avg=49.81, stdev=106.70 00:13:35.174 clat (usec): min=4123, max=76604, avg=12699.32, stdev=9613.16 00:13:35.174 lat (usec): min=4150, max=76618, avg=12749.14, stdev=9633.73 00:13:35.174 clat percentiles (usec): 00:13:35.174 | 1.00th=[ 5211], 5.00th=[ 5866], 10.00th=[ 6390], 20.00th=[ 6915], 00:13:35.174 | 30.00th=[ 7242], 40.00th=[ 8455], 50.00th=[10159], 60.00th=[11076], 00:13:35.174 | 70.00th=[13042], 80.00th=[16909], 90.00th=[20317], 95.00th=[24511], 00:13:35.174 | 99.00th=[62129], 99.50th=[70779], 99.90th=[77071], 99.95th=[77071], 00:13:35.174 | 99.99th=[77071] 00:13:35.174 write: IOPS=99, BW=12.5MiB/s (13.1MB/s)(109MiB/8743msec); 0 zone resets 00:13:35.174 slat (usec): min=33, max=2260, avg=131.41, stdev=181.07 00:13:35.174 clat (msec): min=34, max=319, avg=79.39, stdev=38.35 00:13:35.174 lat (msec): min=35, max=319, avg=79.52, stdev=38.35 00:13:35.174 clat percentiles (msec): 00:13:35.174 | 1.00th=[ 43], 5.00th=[ 49], 10.00th=[ 50], 20.00th=[ 52], 00:13:35.174 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 72], 00:13:35.174 | 70.00th=[ 83], 80.00th=[ 105], 90.00th=[ 130], 95.00th=[ 155], 00:13:35.174 | 99.00th=[ 230], 99.50th=[ 279], 99.90th=[ 321], 99.95th=[ 321], 00:13:35.174 | 99.99th=[ 321] 00:13:35.174 bw ( KiB/s): min= 3840, max=18944, per=0.75%, avg=10866.95, stdev=4952.02, samples=19 00:13:35.174 iops : min= 30, max= 148, avg=84.63, stdev=38.84, samples=19 00:13:35.174 lat (msec) : 10=23.15%, 20=19.56%, 50=11.06%, 100=34.63%, 250=11.24% 00:13:35.175 lat (msec) : 500=0.36% 00:13:35.175 cpu : usr=0.53%, sys=0.35%, ctx=2792, majf=0, 
minf=6 00:13:35.175 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.175 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.175 issued rwts: total=800,872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.175 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.175 job77: (groupid=0, jobs=1): err= 0: pid=85384: Tue Jul 23 22:15:07 2024 00:13:35.175 read: IOPS=89, BW=11.2MiB/s (11.8MB/s)(100MiB/8912msec) 00:13:35.175 slat (usec): min=5, max=1074, avg=48.79, stdev=93.88 00:13:35.175 clat (usec): min=4513, max=67729, avg=14900.24, stdev=8906.28 00:13:35.175 lat (usec): min=4573, max=67745, avg=14949.03, stdev=8908.51 00:13:35.175 clat percentiles (usec): 00:13:35.175 | 1.00th=[ 4817], 5.00th=[ 5407], 10.00th=[ 6587], 20.00th=[ 8356], 00:13:35.175 | 30.00th=[ 9372], 40.00th=[10421], 50.00th=[11994], 60.00th=[14746], 00:13:35.175 | 70.00th=[17433], 80.00th=[20055], 90.00th=[26870], 95.00th=[33424], 00:13:35.175 | 99.00th=[45876], 99.50th=[53740], 99.90th=[67634], 99.95th=[67634], 00:13:35.175 | 99.99th=[67634] 00:13:35.175 write: IOPS=104, BW=13.0MiB/s (13.7MB/s)(112MiB/8555msec); 0 zone resets 00:13:35.175 slat (usec): min=32, max=61821, avg=210.86, stdev=2088.08 00:13:35.175 clat (msec): min=7, max=245, avg=75.84, stdev=31.33 00:13:35.175 lat (msec): min=7, max=245, avg=76.05, stdev=31.25 00:13:35.175 clat percentiles (msec): 00:13:35.175 | 1.00th=[ 19], 5.00th=[ 49], 10.00th=[ 51], 20.00th=[ 54], 00:13:35.175 | 30.00th=[ 58], 40.00th=[ 63], 50.00th=[ 67], 60.00th=[ 73], 00:13:35.175 | 70.00th=[ 80], 80.00th=[ 91], 90.00th=[ 120], 95.00th=[ 142], 00:13:35.175 | 99.00th=[ 188], 99.50th=[ 220], 99.90th=[ 245], 99.95th=[ 245], 00:13:35.175 | 99.99th=[ 245] 00:13:35.175 bw ( KiB/s): min= 3591, max=19238, per=0.79%, avg=11327.40, stdev=5217.46, samples=20 00:13:35.175 iops : min= 28, max= 150, avg=88.35, stdev=40.81, 
samples=20 00:13:35.175 lat (msec) : 10=17.13%, 20=21.09%, 50=14.29%, 100=40.17%, 250=7.32% 00:13:35.175 cpu : usr=0.63%, sys=0.30%, ctx=2856, majf=0, minf=3 00:13:35.175 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.175 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.175 issued rwts: total=800,893,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.175 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.175 job78: (groupid=0, jobs=1): err= 0: pid=85385: Tue Jul 23 22:15:07 2024 00:13:35.175 read: IOPS=88, BW=11.1MiB/s (11.6MB/s)(100MiB/9029msec) 00:13:35.175 slat (usec): min=5, max=1408, avg=59.45, stdev=122.44 00:13:35.175 clat (usec): min=4664, max=64096, avg=14592.69, stdev=7916.00 00:13:35.175 lat (usec): min=4747, max=64112, avg=14652.14, stdev=7913.49 00:13:35.175 clat percentiles (usec): 00:13:35.175 | 1.00th=[ 5473], 5.00th=[ 7111], 10.00th=[ 7504], 20.00th=[ 8455], 00:13:35.175 | 30.00th=[ 9372], 40.00th=[10945], 50.00th=[12387], 60.00th=[13829], 00:13:35.175 | 70.00th=[16188], 80.00th=[20317], 90.00th=[23462], 95.00th=[30278], 00:13:35.175 | 99.00th=[38536], 99.50th=[47973], 99.90th=[64226], 99.95th=[64226], 00:13:35.175 | 99.99th=[64226] 00:13:35.175 write: IOPS=105, BW=13.1MiB/s (13.8MB/s)(113MiB/8606msec); 0 zone resets 00:13:35.175 slat (usec): min=37, max=57201, avg=202.10, stdev=1916.22 00:13:35.175 clat (msec): min=6, max=300, avg=75.26, stdev=37.87 00:13:35.175 lat (msec): min=6, max=301, avg=75.46, stdev=37.81 00:13:35.175 clat percentiles (msec): 00:13:35.175 | 1.00th=[ 9], 5.00th=[ 49], 10.00th=[ 50], 20.00th=[ 53], 00:13:35.175 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 65], 60.00th=[ 71], 00:13:35.175 | 70.00th=[ 78], 80.00th=[ 88], 90.00th=[ 117], 95.00th=[ 153], 00:13:35.175 | 99.00th=[ 251], 99.50th=[ 266], 99.90th=[ 300], 99.95th=[ 300], 00:13:35.175 | 99.99th=[ 300] 00:13:35.175 
bw ( KiB/s): min= 1539, max=21504, per=0.80%, avg=11479.00, stdev=5867.18, samples=20 00:13:35.175 iops : min= 12, max= 168, avg=89.55, stdev=45.91, samples=20 00:13:35.175 lat (msec) : 10=17.02%, 20=20.95%, 50=15.32%, 100=39.38%, 250=6.75% 00:13:35.175 lat (msec) : 500=0.59% 00:13:35.175 cpu : usr=0.66%, sys=0.23%, ctx=2857, majf=0, minf=1 00:13:35.175 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.175 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.175 issued rwts: total=800,904,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.175 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.175 job79: (groupid=0, jobs=1): err= 0: pid=85386: Tue Jul 23 22:15:07 2024 00:13:35.175 read: IOPS=89, BW=11.2MiB/s (11.7MB/s)(100MiB/8958msec) 00:13:35.175 slat (usec): min=5, max=1489, avg=45.37, stdev=94.41 00:13:35.175 clat (msec): min=3, max=185, avg=15.16, stdev=20.54 00:13:35.175 lat (msec): min=4, max=185, avg=15.21, stdev=20.54 00:13:35.175 clat percentiles (msec): 00:13:35.175 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:13:35.175 | 30.00th=[ 8], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 13], 00:13:35.175 | 70.00th=[ 15], 80.00th=[ 18], 90.00th=[ 22], 95.00th=[ 34], 00:13:35.175 | 99.00th=[ 138], 99.50th=[ 169], 99.90th=[ 186], 99.95th=[ 186], 00:13:35.175 | 99.99th=[ 186] 00:13:35.175 write: IOPS=104, BW=13.1MiB/s (13.7MB/s)(112MiB/8517msec); 0 zone resets 00:13:35.175 slat (usec): min=26, max=20570, avg=151.48, stdev=705.87 00:13:35.175 clat (msec): min=2, max=239, avg=75.68, stdev=34.90 00:13:35.175 lat (msec): min=2, max=240, avg=75.84, stdev=34.88 00:13:35.175 clat percentiles (msec): 00:13:35.175 | 1.00th=[ 6], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 52], 00:13:35.175 | 30.00th=[ 56], 40.00th=[ 60], 50.00th=[ 65], 60.00th=[ 72], 00:13:35.175 | 70.00th=[ 81], 80.00th=[ 93], 90.00th=[ 117], 95.00th=[ 
157], 00:13:35.175 | 99.00th=[ 203], 99.50th=[ 215], 99.90th=[ 241], 99.95th=[ 241], 00:13:35.175 | 99.99th=[ 241] 00:13:35.175 bw ( KiB/s): min= 2304, max=21760, per=0.81%, avg=11711.39, stdev=5162.99, samples=18 00:13:35.175 iops : min= 18, max= 170, avg=91.44, stdev=40.37, samples=18 00:13:35.175 lat (msec) : 4=0.41%, 10=22.75%, 20=19.03%, 50=11.17%, 100=37.23% 00:13:35.175 lat (msec) : 250=9.40% 00:13:35.175 cpu : usr=0.61%, sys=0.28%, ctx=2766, majf=0, minf=3 00:13:35.175 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.175 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.175 issued rwts: total=800,892,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.175 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.175 job80: (groupid=0, jobs=1): err= 0: pid=85393: Tue Jul 23 22:15:07 2024 00:13:35.175 read: IOPS=100, BW=12.6MiB/s (13.2MB/s)(100MiB/7929msec) 00:13:35.175 slat (usec): min=5, max=1908, avg=53.47, stdev=129.50 00:13:35.175 clat (msec): min=3, max=128, avg=12.34, stdev=14.52 00:13:35.175 lat (msec): min=3, max=128, avg=12.39, stdev=14.52 00:13:35.175 clat percentiles (msec): 00:13:35.175 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 8], 00:13:35.175 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 10], 00:13:35.175 | 70.00th=[ 11], 80.00th=[ 13], 90.00th=[ 18], 95.00th=[ 30], 00:13:35.175 | 99.00th=[ 65], 99.50th=[ 124], 99.90th=[ 129], 99.95th=[ 129], 00:13:35.175 | 99.99th=[ 129] 00:13:35.175 write: IOPS=92, BW=11.5MiB/s (12.1MB/s)(101MiB/8779msec); 0 zone resets 00:13:35.175 slat (usec): min=34, max=2378, avg=136.33, stdev=195.37 00:13:35.175 clat (msec): min=44, max=283, avg=86.02, stdev=32.14 00:13:35.175 lat (msec): min=44, max=283, avg=86.16, stdev=32.14 00:13:35.175 clat percentiles (msec): 00:13:35.175 | 1.00th=[ 48], 5.00th=[ 52], 10.00th=[ 56], 20.00th=[ 60], 00:13:35.175 | 
30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 78], 60.00th=[ 87], 00:13:35.175 | 70.00th=[ 96], 80.00th=[ 110], 90.00th=[ 132], 95.00th=[ 148], 00:13:35.175 | 99.00th=[ 186], 99.50th=[ 211], 99.90th=[ 284], 99.95th=[ 284], 00:13:35.175 | 99.99th=[ 284] 00:13:35.175 bw ( KiB/s): min= 4096, max=16828, per=0.73%, avg=10445.63, stdev=3978.74, samples=19 00:13:35.175 iops : min= 32, max= 131, avg=81.47, stdev=30.94, samples=19 00:13:35.175 lat (msec) : 4=0.43%, 10=30.42%, 20=15.27%, 50=3.10%, 100=37.24% 00:13:35.175 lat (msec) : 250=13.41%, 500=0.12% 00:13:35.175 cpu : usr=0.54%, sys=0.26%, ctx=2780, majf=0, minf=5 00:13:35.175 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.175 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.175 issued rwts: total=800,811,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.175 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.175 job81: (groupid=0, jobs=1): err= 0: pid=85395: Tue Jul 23 22:15:07 2024 00:13:35.175 read: IOPS=87, BW=10.9MiB/s (11.4MB/s)(100MiB/9174msec) 00:13:35.175 slat (usec): min=5, max=1682, avg=49.90, stdev=115.35 00:13:35.175 clat (msec): min=3, max=110, avg=12.76, stdev=10.55 00:13:35.175 lat (msec): min=3, max=110, avg=12.81, stdev=10.56 00:13:35.175 clat percentiles (msec): 00:13:35.175 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:13:35.175 | 30.00th=[ 8], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 13], 00:13:35.175 | 70.00th=[ 14], 80.00th=[ 16], 90.00th=[ 19], 95.00th=[ 24], 00:13:35.175 | 99.00th=[ 46], 99.50th=[ 94], 99.90th=[ 110], 99.95th=[ 110], 00:13:35.175 | 99.99th=[ 110] 00:13:35.175 write: IOPS=109, BW=13.6MiB/s (14.3MB/s)(120MiB/8793msec); 0 zone resets 00:13:35.175 slat (usec): min=37, max=23679, avg=142.83, stdev=776.71 00:13:35.175 clat (msec): min=6, max=313, avg=72.70, stdev=37.85 00:13:35.175 lat (msec): min=6, max=313, avg=72.85, 
stdev=37.82 00:13:35.175 clat percentiles (msec): 00:13:35.175 | 1.00th=[ 14], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 52], 00:13:35.175 | 30.00th=[ 56], 40.00th=[ 60], 50.00th=[ 63], 60.00th=[ 68], 00:13:35.175 | 70.00th=[ 75], 80.00th=[ 86], 90.00th=[ 104], 95.00th=[ 144], 00:13:35.175 | 99.00th=[ 279], 99.50th=[ 300], 99.90th=[ 313], 99.95th=[ 313], 00:13:35.175 | 99.99th=[ 313] 00:13:35.175 bw ( KiB/s): min= 2048, max=24576, per=0.85%, avg=12182.45, stdev=5884.66, samples=20 00:13:35.175 iops : min= 16, max= 192, avg=95.05, stdev=46.03, samples=20 00:13:35.175 lat (msec) : 4=0.11%, 10=21.14%, 20=21.36%, 50=10.74%, 100=40.45% 00:13:35.175 lat (msec) : 250=5.57%, 500=0.62% 00:13:35.175 cpu : usr=0.69%, sys=0.24%, ctx=2868, majf=0, minf=3 00:13:35.176 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.176 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.176 issued rwts: total=800,960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.176 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.176 job82: (groupid=0, jobs=1): err= 0: pid=85399: Tue Jul 23 22:15:07 2024 00:13:35.176 read: IOPS=89, BW=11.2MiB/s (11.7MB/s)(100MiB/8948msec) 00:13:35.176 slat (usec): min=6, max=2496, avg=56.64, stdev=147.34 00:13:35.176 clat (msec): min=5, max=146, avg=15.52, stdev=16.63 00:13:35.176 lat (msec): min=5, max=146, avg=15.58, stdev=16.63 00:13:35.176 clat percentiles (msec): 00:13:35.176 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:13:35.176 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 13], 00:13:35.176 | 70.00th=[ 14], 80.00th=[ 16], 90.00th=[ 22], 95.00th=[ 36], 00:13:35.176 | 99.00th=[ 118], 99.50th=[ 138], 99.90th=[ 146], 99.95th=[ 146], 00:13:35.176 | 99.99th=[ 146] 00:13:35.176 write: IOPS=111, BW=13.9MiB/s (14.6MB/s)(118MiB/8503msec); 0 zone resets 00:13:35.176 slat (usec): min=36, max=10764, 
avg=140.14, stdev=399.32
00:13:35.176 clat (msec): min=18, max=268, avg=71.15, stdev=32.60
00:13:35.176 lat (msec): min=18, max=268, avg=71.29, stdev=32.59
00:13:35.176 clat percentiles (msec):
00:13:35.176 | 1.00th=[ 28], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 53],
00:13:35.176 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 65],
00:13:35.176 | 70.00th=[ 71], 80.00th=[ 84], 90.00th=[ 100], 95.00th=[ 140],
00:13:35.176 | 99.00th=[ 226], 99.50th=[ 241], 99.90th=[ 268], 99.95th=[ 268],
00:13:35.176 | 99.99th=[ 268]
00:13:35.176 bw ( KiB/s): min= 1788, max=21504, per=0.83%, avg=12016.35, stdev=5972.35, samples=20
00:13:35.176 iops : min= 13, max= 168, avg=93.70, stdev=46.83, samples=20
00:13:35.176 lat (msec) : 10=15.51%, 20=24.84%, 50=11.10%, 100=42.70%, 250=5.72%
00:13:35.176 lat (msec) : 500=0.11%
00:13:35.176 cpu : usr=0.62%, sys=0.34%, ctx=2773, majf=0, minf=3
00:13:35.176 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:35.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.176 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.176 issued rwts: total=800,947,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:35.176 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:35.176 job83: (groupid=0, jobs=1): err= 0: pid=85401: Tue Jul 23 22:15:07 2024
00:13:35.176 read: IOPS=87, BW=11.0MiB/s (11.5MB/s)(88.6MiB/8060msec)
00:13:35.176 slat (usec): min=5, max=1080, avg=46.95, stdev=98.30
00:13:35.176 clat (msec): min=2, max=166, avg=15.18, stdev=23.81
00:13:35.176 lat (msec): min=2, max=167, avg=15.23, stdev=23.83
00:13:35.176 clat percentiles (msec):
00:13:35.176 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 7],
00:13:35.176 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 11],
00:13:35.176 | 70.00th=[ 12], 80.00th=[ 14], 90.00th=[ 21], 95.00th=[ 42],
00:13:35.176 | 99.00th=[ 148], 99.50th=[ 167], 99.90th=[ 167], 99.95th=[ 167],
00:13:35.176 | 99.99th=[ 167]
00:13:35.176 write: IOPS=92, BW=11.6MiB/s (12.1MB/s)(100MiB/8644msec); 0 zone resets
00:13:35.176 slat (usec): min=36, max=3736, avg=145.77, stdev=289.36
00:13:35.176 clat (msec): min=46, max=207, avg=85.87, stdev=28.13
00:13:35.176 lat (msec): min=46, max=207, avg=86.01, stdev=28.12
00:13:35.176 clat percentiles (msec):
00:13:35.176 | 1.00th=[ 48], 5.00th=[ 53], 10.00th=[ 57], 20.00th=[ 61],
00:13:35.176 | 30.00th=[ 67], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 87],
00:13:35.176 | 70.00th=[ 95], 80.00th=[ 106], 90.00th=[ 128], 95.00th=[ 146],
00:13:35.176 | 99.00th=[ 161], 99.50th=[ 180], 99.90th=[ 207], 99.95th=[ 207],
00:13:35.176 | 99.99th=[ 207]
00:13:35.176 bw ( KiB/s): min= 5069, max=17049, per=0.72%, avg=10344.16, stdev=3217.90, samples=19
00:13:35.176 iops : min= 39, max= 133, avg=80.37, stdev=25.24, samples=19
00:13:35.176 lat (msec) : 4=0.53%, 10=27.44%, 20=14.18%, 50=3.84%, 100=40.16%
00:13:35.176 lat (msec) : 250=13.85%
00:13:35.176 cpu : usr=0.55%, sys=0.25%, ctx=2492, majf=0, minf=7
00:13:35.176 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:35.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.176 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.176 issued rwts: total=709,800,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:35.176 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:35.176 job84: (groupid=0, jobs=1): err= 0: pid=85402: Tue Jul 23 22:15:07 2024
00:13:35.176 read: IOPS=90, BW=11.3MiB/s (11.9MB/s)(100MiB/8819msec)
00:13:35.176 slat (usec): min=5, max=725, avg=36.60, stdev=68.27
00:13:35.176 clat (msec): min=3, max=110, avg=13.08, stdev=10.49
00:13:35.176 lat (msec): min=3, max=110, avg=13.11, stdev=10.49
00:13:35.176 clat percentiles (msec):
00:13:35.176 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8],
00:13:35.176 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 12],
00:13:35.176 | 70.00th=[ 13], 80.00th=[ 16], 90.00th=[ 21], 95.00th=[ 27],
00:13:35.176 | 99.00th=[ 43], 99.50th=[ 104], 99.90th=[ 111], 99.95th=[ 111],
00:13:35.176 | 99.99th=[ 111]
00:13:35.176 write: IOPS=102, BW=12.9MiB/s (13.5MB/s)(113MiB/8757msec); 0 zone resets
00:13:35.176 slat (usec): min=38, max=39434, avg=190.56, stdev=1354.36
00:13:35.176 clat (msec): min=11, max=273, avg=76.51, stdev=35.01
00:13:35.176 lat (msec): min=13, max=273, avg=76.70, stdev=34.95
00:13:35.176 clat percentiles (msec):
00:13:35.176 | 1.00th=[ 25], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 55],
00:13:35.176 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 73],
00:13:35.176 | 70.00th=[ 83], 80.00th=[ 92], 90.00th=[ 114], 95.00th=[ 148],
00:13:35.176 | 99.00th=[ 224], 99.50th=[ 251], 99.90th=[ 275], 99.95th=[ 275],
00:13:35.176 | 99.99th=[ 275]
00:13:35.176 bw ( KiB/s): min= 3072, max=19161, per=0.79%, avg=11435.00, stdev=5017.15, samples=20
00:13:35.176 iops : min= 24, max= 149, avg=89.10, stdev=39.13, samples=20
00:13:35.176 lat (msec) : 4=0.41%, 10=19.58%, 20=21.99%, 50=9.88%, 100=40.56%
00:13:35.176 lat (msec) : 250=7.29%, 500=0.29%
00:13:35.176 cpu : usr=0.63%, sys=0.29%, ctx=2731, majf=0, minf=5
00:13:35.176 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:35.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.176 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.176 issued rwts: total=800,901,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:35.176 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:35.176 job85: (groupid=0, jobs=1): err= 0: pid=85403: Tue Jul 23 22:15:07 2024
00:13:35.176 read: IOPS=87, BW=11.0MiB/s (11.5MB/s)(100MiB/9094msec)
00:13:35.176 slat (usec): min=6, max=2377, avg=53.52, stdev=143.44
00:13:35.176 clat (usec): min=2895, max=37995, avg=10898.53, stdev=5337.62
00:13:35.176 lat (usec): min=2923, max=38069, avg=10952.06, stdev=5332.34
00:13:35.176 clat percentiles (usec):
00:13:35.176 | 1.00th=[ 4752], 5.00th=[ 5669], 10.00th=[ 6063], 20.00th=[ 6849],
00:13:35.176 | 30.00th=[ 7898], 40.00th=[ 8717], 50.00th=[ 9765], 60.00th=[10552],
00:13:35.176 | 70.00th=[11731], 80.00th=[13566], 90.00th=[16450], 95.00th=[21890],
00:13:35.176 | 99.00th=[32637], 99.50th=[35390], 99.90th=[38011], 99.95th=[38011],
00:13:35.176 | 99.99th=[38011]
00:13:35.176 write: IOPS=106, BW=13.4MiB/s (14.0MB/s)(120MiB/8968msec); 0 zone resets
00:13:35.176 slat (usec): min=27, max=3648, avg=125.62, stdev=207.51
00:13:35.176 clat (msec): min=8, max=351, avg=74.30, stdev=43.46
00:13:35.176 lat (msec): min=8, max=351, avg=74.43, stdev=43.46
00:13:35.176 clat percentiles (msec):
00:13:35.176 | 1.00th=[ 14], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 53],
00:13:35.176 | 30.00th=[ 55], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 66],
00:13:35.176 | 70.00th=[ 74], 80.00th=[ 85], 90.00th=[ 108], 95.00th=[ 148],
00:13:35.176 | 99.00th=[ 279], 99.50th=[ 313], 99.90th=[ 351], 99.95th=[ 351],
00:13:35.176 | 99.99th=[ 351]
00:13:35.176 bw ( KiB/s): min= 1792, max=23086, per=0.84%, avg=12165.90, stdev=6018.07, samples=20
00:13:35.176 iops : min= 14, max= 180, avg=94.95, stdev=46.96, samples=20
00:13:35.176 lat (msec) : 4=0.11%, 10=24.91%, 20=18.83%, 50=9.50%, 100=40.27%
00:13:35.176 lat (msec) : 250=5.18%, 500=1.19%
00:13:35.176 cpu : usr=0.66%, sys=0.27%, ctx=2866, majf=0, minf=3
00:13:35.176 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:35.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.176 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.176 issued rwts: total=800,958,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:35.176 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:35.176 job86: (groupid=0, jobs=1): err= 0: pid=85404: Tue Jul 23 22:15:07 2024
00:13:35.176 read: IOPS=93, BW=11.7MiB/s (12.3MB/s)(100MiB/8541msec)
00:13:35.176 slat (usec): min=5, max=1671, avg=50.19, stdev=117.31
00:13:35.176 clat (usec): min=3307, max=47971, avg=12428.31, stdev=7974.48
00:13:35.176 lat (usec): min=3322, max=47987, avg=12478.50, stdev=7967.24
00:13:35.176 clat percentiles (usec):
00:13:35.176 | 1.00th=[ 4178], 5.00th=[ 4817], 10.00th=[ 5407], 20.00th=[ 6849],
00:13:35.176 | 30.00th=[ 7701], 40.00th=[ 8356], 50.00th=[ 9634], 60.00th=[11731],
00:13:35.176 | 70.00th=[13435], 80.00th=[17433], 90.00th=[22152], 95.00th=[26608],
00:13:35.176 | 99.00th=[43254], 99.50th=[45351], 99.90th=[47973], 99.95th=[47973],
00:13:35.176 | 99.99th=[47973]
00:13:35.176 write: IOPS=93, BW=11.7MiB/s (12.3MB/s)(103MiB/8780msec); 0 zone resets
00:13:35.176 slat (usec): min=35, max=2859, avg=145.00, stdev=244.41
00:13:35.176 clat (msec): min=37, max=287, avg=84.44, stdev=35.97
00:13:35.176 lat (msec): min=37, max=287, avg=84.59, stdev=35.96
00:13:35.176 clat percentiles (msec):
00:13:35.176 | 1.00th=[ 45], 5.00th=[ 49], 10.00th=[ 51], 20.00th=[ 55],
00:13:35.176 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 75], 60.00th=[ 85],
00:13:35.176 | 70.00th=[ 94], 80.00th=[ 108], 90.00th=[ 131], 95.00th=[ 153],
00:13:35.176 | 99.00th=[ 220], 99.50th=[ 245], 99.90th=[ 288], 99.95th=[ 288],
00:13:35.176 | 99.99th=[ 288]
00:13:35.176 bw ( KiB/s): min= 2816, max=17408, per=0.75%, avg=10775.00, stdev=4373.74, samples=19
00:13:35.176 iops : min= 22, max= 136, avg=83.95, stdev=34.20, samples=19
00:13:35.176 lat (msec) : 4=0.31%, 10=25.35%, 20=17.05%, 50=11.14%, 100=33.54%
00:13:35.176 lat (msec) : 250=12.43%, 500=0.18%
00:13:35.176 cpu : usr=0.63%, sys=0.25%, ctx=2739, majf=0, minf=3
00:13:35.177 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:35.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.177 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.177 issued rwts: total=800,825,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:35.177 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:35.177 job87: (groupid=0, jobs=1): err= 0: pid=85405: Tue Jul 23 22:15:07 2024
00:13:35.177 read: IOPS=91, BW=11.5MiB/s (12.0MB/s)(100MiB/8727msec)
00:13:35.177 slat (usec): min=5, max=1672, avg=58.16, stdev=143.46
00:13:35.177 clat (usec): min=3111, max=88342, avg=13044.19, stdev=11526.75
00:13:35.177 lat (usec): min=3475, max=88355, avg=13102.36, stdev=11522.84
00:13:35.177 clat percentiles (usec):
00:13:35.177 | 1.00th=[ 4228], 5.00th=[ 4817], 10.00th=[ 6128], 20.00th=[ 7373],
00:13:35.177 | 30.00th=[ 7832], 40.00th=[ 8586], 50.00th=[10290], 60.00th=[11600],
00:13:35.177 | 70.00th=[12649], 80.00th=[16057], 90.00th=[21890], 95.00th=[27395],
00:13:35.177 | 99.00th=[81265], 99.50th=[85459], 99.90th=[88605], 99.95th=[88605],
00:13:35.177 | 99.99th=[88605]
00:13:35.177 write: IOPS=94, BW=11.8MiB/s (12.4MB/s)(104MiB/8752msec); 0 zone resets
00:13:35.177 slat (usec): min=34, max=57179, avg=209.08, stdev=2001.49
00:13:35.177 clat (msec): min=24, max=227, avg=83.35, stdev=32.27
00:13:35.177 lat (msec): min=24, max=227, avg=83.56, stdev=32.22
00:13:35.177 clat percentiles (msec):
00:13:35.177 | 1.00th=[ 46], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 56],
00:13:35.177 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 77], 60.00th=[ 86],
00:13:35.177 | 70.00th=[ 94], 80.00th=[ 107], 90.00th=[ 130], 95.00th=[ 150],
00:13:35.177 | 99.00th=[ 182], 99.50th=[ 192], 99.90th=[ 228], 99.95th=[ 228],
00:13:35.177 | 99.99th=[ 228]
00:13:35.177 bw ( KiB/s): min= 2304, max=18176, per=0.73%, avg=10505.80, stdev=4948.46, samples=20
00:13:35.177 iops : min= 18, max= 142, avg=81.85, stdev=38.74, samples=20
00:13:35.177 lat (msec) : 4=0.25%, 10=24.02%, 20=18.80%, 50=10.26%, 100=34.46%
00:13:35.177 lat (msec) : 250=12.22%
00:13:35.177 cpu : usr=0.46%, sys=0.41%, ctx=2803, majf=0, minf=1
00:13:35.177 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:35.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.177 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.177 issued rwts: total=800,828,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:35.177 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:35.177 job88: (groupid=0, jobs=1): err= 0: pid=85406: Tue Jul 23 22:15:07 2024
00:13:35.177 read: IOPS=92, BW=11.6MiB/s (12.2MB/s)(100MiB/8610msec)
00:13:35.177 slat (usec): min=5, max=1015, avg=52.17, stdev=107.08
00:13:35.177 clat (usec): min=4228, max=54188, avg=11376.59, stdev=5707.40
00:13:35.177 lat (usec): min=4239, max=54205, avg=11428.76, stdev=5706.86
00:13:35.177 clat percentiles (usec):
00:13:35.177 | 1.00th=[ 4948], 5.00th=[ 5604], 10.00th=[ 5997], 20.00th=[ 7046],
00:13:35.177 | 30.00th=[ 8717], 40.00th=[ 9503], 50.00th=[10159], 60.00th=[10945],
00:13:35.177 | 70.00th=[12125], 80.00th=[14091], 90.00th=[17433], 95.00th=[20579],
00:13:35.177 | 99.00th=[33162], 99.50th=[42730], 99.90th=[54264], 99.95th=[54264],
00:13:35.177 | 99.99th=[54264]
00:13:35.177 write: IOPS=100, BW=12.6MiB/s (13.2MB/s)(112MiB/8885msec); 0 zone resets
00:13:35.177 slat (usec): min=35, max=5543, avg=141.70, stdev=295.94
00:13:35.177 clat (msec): min=40, max=293, avg=78.92, stdev=34.78
00:13:35.177 lat (msec): min=40, max=294, avg=79.06, stdev=34.79
00:13:35.177 clat percentiles (msec):
00:13:35.177 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 53],
00:13:35.177 | 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 67], 60.00th=[ 75],
00:13:35.177 | 70.00th=[ 87], 80.00th=[ 100], 90.00th=[ 127], 95.00th=[ 146],
00:13:35.177 | 99.00th=[ 215], 99.50th=[ 247], 99.90th=[ 296], 99.95th=[ 296],
00:13:35.177 | 99.99th=[ 296]
00:13:35.177 bw ( KiB/s): min= 4352, max=19238, per=0.78%, avg=11232.74, stdev=4782.46, samples=19
00:13:35.177 iops : min= 34, max= 150, avg=87.53, stdev=37.32, samples=19
00:13:35.177 lat (msec) : 10=22.27%, 20=22.33%, 50=9.86%, 100=35.26%, 250=10.10%
00:13:35.177 lat (msec) : 500=0.18%
00:13:35.177 cpu : usr=0.70%, sys=0.22%, ctx=2766, majf=0, minf=3
00:13:35.177 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:35.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.177 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.177 issued rwts: total=800,893,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:35.177 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:35.177 job89: (groupid=0, jobs=1): err= 0: pid=85407: Tue Jul 23 22:15:07 2024
00:13:35.177 read: IOPS=92, BW=11.6MiB/s (12.2MB/s)(100MiB/8613msec)
00:13:35.177 slat (usec): min=5, max=1548, avg=57.63, stdev=115.65
00:13:35.177 clat (msec): min=4, max=137, avg=13.51, stdev=13.05
00:13:35.177 lat (msec): min=4, max=137, avg=13.57, stdev=13.04
00:13:35.177 clat percentiles (msec):
00:13:35.177 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8],
00:13:35.177 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 13],
00:13:35.177 | 70.00th=[ 14], 80.00th=[ 17], 90.00th=[ 22], 95.00th=[ 27],
00:13:35.177 | 99.00th=[ 42], 99.50th=[ 129], 99.90th=[ 138], 99.95th=[ 138],
00:13:35.177 | 99.99th=[ 138]
00:13:35.177 write: IOPS=108, BW=13.6MiB/s (14.2MB/s)(118MiB/8673msec); 0 zone resets
00:13:35.177 slat (usec): min=29, max=3563, avg=129.06, stdev=219.75
00:13:35.177 clat (msec): min=32, max=269, avg=72.87, stdev=30.22
00:13:35.177 lat (msec): min=32, max=269, avg=73.00, stdev=30.24
00:13:35.177 clat percentiles (msec):
00:13:35.177 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 53],
00:13:35.177 | 30.00th=[ 57], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 69],
00:13:35.177 | 70.00th=[ 77], 80.00th=[ 88], 90.00th=[ 107], 95.00th=[ 138],
00:13:35.177 | 99.00th=[ 199], 99.50th=[ 213], 99.90th=[ 271], 99.95th=[ 271],
00:13:35.177 | 99.99th=[ 271]
00:13:35.177 bw ( KiB/s): min= 2043, max=18944, per=0.82%, avg=11850.05, stdev=5296.86, samples=19
00:13:35.177 iops : min= 15, max= 148, avg=92.37, stdev=41.38, samples=19
00:13:35.177 lat (msec) : 10=19.69%, 20=20.67%, 50=12.46%, 100=39.78%, 250=7.35%
00:13:35.177 lat (msec) : 500=0.06%
00:13:35.177 cpu : usr=0.59%, sys=0.32%, ctx=2922, majf=0, minf=7
00:13:35.177 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:35.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.177 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.177 issued rwts: total=800,942,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:35.177 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:35.177 job90: (groupid=0, jobs=1): err= 0: pid=85408: Tue Jul 23 22:15:07 2024
00:13:35.177 read: IOPS=88, BW=11.1MiB/s (11.7MB/s)(100MiB/8990msec)
00:13:35.177 slat (usec): min=5, max=1402, avg=52.55, stdev=123.40
00:13:35.177 clat (usec): min=4573, max=56747, avg=11627.35, stdev=6680.08
00:13:35.177 lat (usec): min=4781, max=56756, avg=11679.90, stdev=6679.34
00:13:35.177 clat percentiles (usec):
00:13:35.177 | 1.00th=[ 5473], 5.00th=[ 5932], 10.00th=[ 6390], 20.00th=[ 7046],
00:13:35.177 | 30.00th=[ 7570], 40.00th=[ 8356], 50.00th=[10290], 60.00th=[11207],
00:13:35.177 | 70.00th=[11994], 80.00th=[14353], 90.00th=[19530], 95.00th=[24249],
00:13:35.177 | 99.00th=[42206], 99.50th=[49021], 99.90th=[56886], 99.95th=[56886],
00:13:35.177 | 99.99th=[56886]
00:13:35.177 write: IOPS=102, BW=12.8MiB/s (13.4MB/s)(114MiB/8888msec); 0 zone resets
00:13:35.177 slat (usec): min=31, max=24043, avg=172.06, stdev=885.85
00:13:35.177 clat (msec): min=10, max=337, avg=77.23, stdev=38.34
00:13:35.177 lat (msec): min=10, max=338, avg=77.40, stdev=38.28
00:13:35.177 clat percentiles (msec):
00:13:35.177 | 1.00th=[ 21], 5.00th=[ 49], 10.00th=[ 50], 20.00th=[ 53],
00:13:35.177 | 30.00th=[ 56], 40.00th=[ 60], 50.00th=[ 65], 60.00th=[ 71],
00:13:35.177 | 70.00th=[ 83], 80.00th=[ 96], 90.00th=[ 120], 95.00th=[ 142],
00:13:35.177 | 99.00th=[ 236], 99.50th=[ 271], 99.90th=[ 338], 99.95th=[ 338],
00:13:35.177 | 99.99th=[ 338]
00:13:35.177 bw ( KiB/s): min= 2304, max=20736, per=0.79%, avg=11402.63, stdev=5530.34, samples=19
00:13:35.177 iops : min= 18, max= 162, avg=88.89, stdev=43.20, samples=19
00:13:35.177 lat (msec) : 10=22.78%, 20=19.98%, 50=11.97%, 100=36.21%, 250=8.70%
00:13:35.177 lat (msec) : 500=0.35%
00:13:35.177 cpu : usr=0.65%, sys=0.24%, ctx=2840, majf=0, minf=3
00:13:35.177 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:35.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.177 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.177 issued rwts: total=800,912,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:35.177 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:35.177 job91: (groupid=0, jobs=1): err= 0: pid=85409: Tue Jul 23 22:15:07 2024
00:13:35.177 read: IOPS=90, BW=11.3MiB/s (11.9MB/s)(100MiB/8819msec)
00:13:35.177 slat (usec): min=5, max=1027, avg=42.12, stdev=89.48
00:13:35.177 clat (usec): min=6474, max=50078, avg=13908.79, stdev=6680.19
00:13:35.177 lat (usec): min=6492, max=50116, avg=13950.91, stdev=6675.44
00:13:35.177 clat percentiles (usec):
00:13:35.177 | 1.00th=[ 6915], 5.00th=[ 7242], 10.00th=[ 7701], 20.00th=[ 8586],
00:13:35.177 | 30.00th=[ 9503], 40.00th=[10552], 50.00th=[11600], 60.00th=[14091],
00:13:35.177 | 70.00th=[15926], 80.00th=[18744], 90.00th=[21103], 95.00th=[25822],
00:13:35.177 | 99.00th=[40109], 99.50th=[42206], 99.90th=[50070], 99.95th=[50070],
00:13:35.177 | 99.99th=[50070]
00:13:35.177 write: IOPS=105, BW=13.1MiB/s (13.8MB/s)(113MiB/8617msec); 0 zone resets
00:13:35.177 slat (usec): min=31, max=6606, avg=141.92, stdev=307.56
00:13:35.177 clat (msec): min=38, max=257, avg=75.32, stdev=34.81
00:13:35.177 lat (msec): min=38, max=257, avg=75.46, stdev=34.80
00:13:35.177 clat percentiles (msec):
00:13:35.177 | 1.00th=[ 47], 5.00th=[ 49], 10.00th=[ 50], 20.00th=[ 52],
00:13:35.177 | 30.00th=[ 55], 40.00th=[ 58], 50.00th=[ 64], 60.00th=[ 71],
00:13:35.177 | 70.00th=[ 81], 80.00th=[ 92], 90.00th=[ 112], 95.00th=[ 142],
00:13:35.177 | 99.00th=[ 226], 99.50th=[ 234], 99.90th=[ 257], 99.95th=[ 257],
00:13:35.177 | 99.99th=[ 257]
00:13:35.177 bw ( KiB/s): min= 2810, max=19712, per=0.80%, avg=11500.00, stdev=5700.35, samples=19
00:13:35.177 iops : min= 21, max= 154, avg=89.68, stdev=44.62, samples=19
00:13:35.177 lat (msec) : 10=16.53%, 20=24.74%, 50=13.31%, 100=38.51%, 250=6.86%
00:13:35.177 lat (msec) : 500=0.06%
00:13:35.177 cpu : usr=0.63%, sys=0.28%, ctx=2760, majf=0, minf=3
00:13:35.178 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:35.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.178 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.178 issued rwts: total=800,906,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:35.178 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:35.178 job92: (groupid=0, jobs=1): err= 0: pid=85410: Tue Jul 23 22:15:07 2024
00:13:35.178 read: IOPS=81, BW=10.2MiB/s (10.7MB/s)(80.0MiB/7847msec)
00:13:35.178 slat (usec): min=5, max=2969, avg=76.50, stdev=206.50
00:13:35.178 clat (msec): min=2, max=228, avg=15.57, stdev=24.95
00:13:35.178 lat (msec): min=3, max=228, avg=15.65, stdev=24.95
00:13:35.178 clat percentiles (msec):
00:13:35.178 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7],
00:13:35.178 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 12],
00:13:35.178 | 70.00th=[ 14], 80.00th=[ 15], 90.00th=[ 20], 95.00th=[ 26],
00:13:35.178 | 99.00th=[ 153], 99.50th=[ 226], 99.90th=[ 228], 99.95th=[ 228],
00:13:35.178 | 99.99th=[ 228]
00:13:35.178 write: IOPS=89, BW=11.1MiB/s (11.7MB/s)(97.6MiB/8760msec); 0 zone resets
00:13:35.178 slat (usec): min=35, max=4856, avg=137.15, stdev=306.47
00:13:35.178 clat (msec): min=31, max=324, avg=89.26, stdev=34.77
00:13:35.178 lat (msec): min=31, max=324, avg=89.40, stdev=34.79
00:13:35.178 clat percentiles (msec):
00:13:35.178 | 1.00th=[ 48], 5.00th=[ 50], 10.00th=[ 53], 20.00th=[ 60],
00:13:35.178 | 30.00th=[ 65], 40.00th=[ 75], 50.00th=[ 83], 60.00th=[ 94],
00:13:35.178 | 70.00th=[ 101], 80.00th=[ 111], 90.00th=[ 132], 95.00th=[ 155],
00:13:35.178 | 99.00th=[ 207], 99.50th=[ 224], 99.90th=[ 326], 99.95th=[ 326],
00:13:35.178 | 99.99th=[ 326]
00:13:35.178 bw ( KiB/s): min= 2788, max=15238, per=0.67%, avg=9719.89, stdev=3310.26, samples=19
00:13:35.178 iops : min= 21, max= 119, avg=75.42, stdev=26.01, samples=19
00:13:35.178 lat (msec) : 4=0.56%, 10=20.55%, 20=19.42%, 50=5.77%, 100=36.38%
00:13:35.178 lat (msec) : 250=17.17%, 500=0.14%
00:13:35.178 cpu : usr=0.50%, sys=0.24%, ctx=2431, majf=0, minf=11
00:13:35.178 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:35.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.178 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.178 issued rwts: total=640,781,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:35.178 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:35.178 job93: (groupid=0, jobs=1): err= 0: pid=85411: Tue Jul 23 22:15:07 2024
00:13:35.178 read: IOPS=91, BW=11.5MiB/s (12.0MB/s)(100MiB/8710msec)
00:13:35.178 slat (usec): min=5, max=1953, avg=48.37, stdev=105.79
00:13:35.178 clat (usec): min=4844, max=89216, avg=15366.11, stdev=11176.80
00:13:35.178 lat (usec): min=4857, max=89247, avg=15414.48, stdev=11177.77
00:13:35.178 clat percentiles (usec):
00:13:35.178 | 1.00th=[ 5211], 5.00th=[ 6063], 10.00th=[ 6849], 20.00th=[ 8225],
00:13:35.178 | 30.00th=[ 9765], 40.00th=[10552], 50.00th=[11863], 60.00th=[13698],
00:13:35.178 | 70.00th=[17171], 80.00th=[19530], 90.00th=[25822], 95.00th=[35390],
00:13:35.178 | 99.00th=[65799], 99.50th=[76022], 99.90th=[89654], 99.95th=[89654],
00:13:35.178 | 99.99th=[89654]
00:13:35.178 write: IOPS=99, BW=12.4MiB/s (13.0MB/s)(106MiB/8511msec); 0 zone resets
00:13:35.178 slat (usec): min=37, max=37575, avg=193.65, stdev=1330.06
00:13:35.178 clat (msec): min=5, max=307, avg=79.37, stdev=37.14
00:13:35.178 lat (msec): min=7, max=307, avg=79.56, stdev=37.07
00:13:35.178 clat percentiles (msec):
00:13:35.178 | 1.00th=[ 12], 5.00th=[ 49], 10.00th=[ 50], 20.00th=[ 54],
00:13:35.178 | 30.00th=[ 58], 40.00th=[ 63], 50.00th=[ 68], 60.00th=[ 77],
00:13:35.178 | 70.00th=[ 89], 80.00th=[ 101], 90.00th=[ 118], 95.00th=[ 155],
00:13:35.178 | 99.00th=[ 234], 99.50th=[ 271], 99.90th=[ 309], 99.95th=[ 309],
00:13:35.178 | 99.99th=[ 309]
00:13:35.178 bw ( KiB/s): min= 2304, max=19712, per=0.74%, avg=10709.30, stdev=5215.05, samples=20
00:13:35.178 iops : min= 18, max= 154, avg=83.55, stdev=40.67, samples=20
00:13:35.178 lat (msec) : 10=16.42%, 20=23.78%, 50=12.96%, 100=36.37%, 250=10.04%
00:13:35.178 lat (msec) : 500=0.43%
00:13:35.178 cpu : usr=0.56%, sys=0.30%, ctx=2813, majf=0, minf=7
00:13:35.178 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:35.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.178 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.178 issued rwts: total=800,844,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:35.178 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:35.178 job94: (groupid=0, jobs=1): err= 0: pid=85412: Tue Jul 23 22:15:07 2024
00:13:35.178 read: IOPS=89, BW=11.2MiB/s (11.7MB/s)(100MiB/8967msec)
00:13:35.178 slat (usec): min=5, max=587, avg=33.67, stdev=61.40
00:13:35.178 clat (usec): min=5414, max=76270, avg=13348.54, stdev=8139.10
00:13:35.178 lat (usec): min=5437, max=76278, avg=13382.21, stdev=8139.02
00:13:35.178 clat percentiles (usec):
00:13:35.178 | 1.00th=[ 6325], 5.00th=[ 6718], 10.00th=[ 7242], 20.00th=[ 8225],
00:13:35.178 | 30.00th=[ 8848], 40.00th=[10028], 50.00th=[11207], 60.00th=[12780],
00:13:35.178 | 70.00th=[15008], 80.00th=[16909], 90.00th=[20579], 95.00th=[22938],
00:13:35.178 | 99.00th=[53216], 99.50th=[69731], 99.90th=[76022], 99.95th=[76022],
00:13:35.178 | 99.99th=[76022]
00:13:35.178 write: IOPS=103, BW=13.0MiB/s (13.6MB/s)(113MiB/8683msec); 0 zone resets
00:13:35.178 slat (usec): min=30, max=2833, avg=134.57, stdev=199.47
00:13:35.178 clat (msec): min=2, max=280, avg=76.42, stdev=34.32
00:13:35.178 lat (msec): min=2, max=280, avg=76.56, stdev=34.32
00:13:35.178 clat percentiles (msec):
00:13:35.178 | 1.00th=[ 7], 5.00th=[ 49], 10.00th=[ 50], 20.00th=[ 53],
00:13:35.178 | 30.00th=[ 57], 40.00th=[ 62], 50.00th=[ 69], 60.00th=[ 75],
00:13:35.178 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 117], 95.00th=[ 134],
00:13:35.178 | 99.00th=[ 207], 99.50th=[ 271], 99.90th=[ 279], 99.95th=[ 279],
00:13:35.178 | 99.99th=[ 279]
00:13:35.178 bw ( KiB/s): min= 1792, max=23599, per=0.78%, avg=11276.68, stdev=5590.38, samples=19
00:13:35.178 iops : min= 14, max= 184, avg=87.89, stdev=43.78, samples=19
00:13:35.178 lat (msec) : 4=0.12%, 10=18.98%, 20=23.62%, 50=11.52%, 100=36.66%
00:13:35.178 lat (msec) : 250=8.70%, 500=0.41%
00:13:35.178 cpu : usr=0.63%, sys=0.29%, ctx=2671, majf=0, minf=5
00:13:35.178 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:35.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.178 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.178 issued rwts: total=800,902,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:35.178 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:35.178 job95: (groupid=0, jobs=1): err= 0: pid=85413: Tue Jul 23 22:15:07 2024
00:13:35.178 read: IOPS=90, BW=11.3MiB/s (11.8MB/s)(100MiB/8848msec)
00:13:35.178 slat (usec): min=5, max=1983, avg=44.78, stdev=101.32
00:13:35.178 clat (usec): min=5149, max=67981, avg=14859.12, stdev=8689.46
00:13:35.178 lat (usec): min=5164, max=68176, avg=14903.91, stdev=8685.23
00:13:35.178 clat percentiles (usec):
00:13:35.178 | 1.00th=[ 5669], 5.00th=[ 6325], 10.00th=[ 6783], 20.00th=[ 7504],
00:13:35.178 | 30.00th=[ 9241], 40.00th=[11731], 50.00th=[13304], 60.00th=[14222],
00:13:35.178 | 70.00th=[17695], 80.00th=[19530], 90.00th=[22938], 95.00th=[31589],
00:13:35.178 | 99.00th=[45876], 99.50th=[61604], 99.90th=[67634], 99.95th=[67634],
00:13:35.178 | 99.99th=[67634]
00:13:35.178 write: IOPS=99, BW=12.5MiB/s (13.1MB/s)(107MiB/8540msec); 0 zone resets
00:13:35.178 slat (usec): min=32, max=18644, avg=170.26, stdev=699.30
00:13:35.178 clat (msec): min=32, max=367, avg=79.14, stdev=39.87
00:13:35.178 lat (msec): min=33, max=367, avg=79.31, stdev=39.84
00:13:35.178 clat percentiles (msec):
00:13:35.178 | 1.00th=[ 45], 5.00th=[ 49], 10.00th=[ 50], 20.00th=[ 52],
00:13:35.178 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 73],
00:13:35.178 | 70.00th=[ 85], 80.00th=[ 99], 90.00th=[ 122], 95.00th=[ 161],
00:13:35.178 | 99.00th=[ 247], 99.50th=[ 257], 99.90th=[ 368], 99.95th=[ 368],
00:13:35.178 | 99.99th=[ 368]
00:13:35.178 bw ( KiB/s): min= 2043, max=18688, per=0.75%, avg=10775.42, stdev=5710.41, samples=19
00:13:35.178 iops : min= 15, max= 146, avg=84.00, stdev=44.73, samples=19
00:13:35.178 lat (msec) : 10=15.73%, 20=24.14%, 50=15.73%, 100=35.03%, 250=9.01%
00:13:35.178 lat (msec) : 500=0.36%
00:13:35.178 cpu : usr=0.60%, sys=0.26%, ctx=2762, majf=0, minf=1
00:13:35.178 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:35.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.178 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.178 issued rwts: total=800,853,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:35.178 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:35.179 job96: (groupid=0, jobs=1): err= 0: pid=85414: Tue Jul 23 22:15:07 2024
00:13:35.179 read: IOPS=91, BW=11.4MiB/s (12.0MB/s)(100MiB/8759msec)
00:13:35.179 slat (usec): min=5, max=1115, avg=44.32, stdev=91.64
00:13:35.179 clat (msec): min=5, max=142, avg=15.93, stdev=14.71
00:13:35.179 lat (msec): min=5, max=142, avg=15.97, stdev=14.70
00:13:35.179 clat percentiles (msec):
00:13:35.179 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8],
00:13:35.179 | 30.00th=[ 9], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 15],
00:13:35.179 | 70.00th=[ 18], 80.00th=[ 21], 90.00th=[ 28], 95.00th=[ 36],
00:13:35.179 | 99.00th=[ 54], 99.50th=[ 138], 99.90th=[ 142], 99.95th=[ 142],
00:13:35.179 | 99.99th=[ 142]
00:13:35.179 write: IOPS=103, BW=12.9MiB/s (13.5MB/s)(109MiB/8419msec); 0 zone resets
00:13:35.179 slat (usec): min=30, max=12028, avg=156.56, stdev=448.97
00:13:35.179 clat (msec): min=26, max=268, avg=76.56, stdev=31.72
00:13:35.179 lat (msec): min=26, max=268, avg=76.71, stdev=31.71
00:13:35.179 clat percentiles (msec):
00:13:35.179 | 1.00th=[ 42], 5.00th=[ 50], 10.00th=[ 50], 20.00th=[ 54],
00:13:35.179 | 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 67], 60.00th=[ 74],
00:13:35.179 | 70.00th=[ 83], 80.00th=[ 96], 90.00th=[ 114], 95.00th=[ 136],
00:13:35.179 | 99.00th=[ 205], 99.50th=[ 222], 99.90th=[ 271], 99.95th=[ 271],
00:13:35.179 | 99.99th=[ 271]
00:13:35.179 bw ( KiB/s): min= 2048, max=19456, per=0.76%, avg=10881.11, stdev=5339.57, samples=19
00:13:35.179 iops : min= 16, max= 152, avg=84.74, stdev=41.94, samples=19
00:13:35.179 lat (msec) : 10=18.38%, 20=19.28%, 50=16.65%, 100=36.77%, 250=8.86%
00:13:35.179 lat (msec) : 500=0.06%
00:13:35.179 cpu : usr=0.62%, sys=0.29%, ctx=2714, majf=0, minf=3
00:13:35.179 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:35.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.179 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.179 issued rwts: total=800,870,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:35.179 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:35.179 job97: (groupid=0, jobs=1): err= 0: pid=85415: Tue Jul 23 22:15:07 2024
00:13:35.179 read: IOPS=88, BW=11.1MiB/s (11.7MB/s)(95.2MiB/8564msec)
00:13:35.179 slat (usec): min=5, max=843, avg=48.40, stdev=80.56
00:13:35.179 clat (usec): min=3976, max=93033, avg=15054.41, stdev=9495.32
00:13:35.179 lat (usec): min=3989, max=93039, avg=15102.81, stdev=9495.13
00:13:35.179 clat percentiles (usec):
00:13:35.179 | 1.00th=[ 4293], 5.00th=[ 6194], 10.00th=[ 6849], 20.00th=[ 8979],
00:13:35.179 | 30.00th=[10683], 40.00th=[11731], 50.00th=[13435], 60.00th=[15270],
00:13:35.179 | 70.00th=[16909], 80.00th=[19268], 90.00th=[21890], 95.00th=[28181],
00:13:35.179 | 99.00th=[66323], 99.50th=[87557], 99.90th=[92799], 99.95th=[92799],
00:13:35.179 | 99.99th=[92799]
00:13:35.179 write: IOPS=93, BW=11.7MiB/s (12.3MB/s)(100MiB/8553msec); 0 zone resets
00:13:35.179 slat (usec): min=37, max=1408, avg=126.57, stdev=163.28
00:13:35.179 clat (msec): min=41, max=312, avg=84.79, stdev=36.35
00:13:35.179 lat (msec): min=41, max=312, avg=84.92, stdev=36.36
00:13:35.179 clat percentiles (msec):
00:13:35.179 | 1.00th=[ 48], 5.00th=[ 50], 10.00th=[ 52], 20.00th=[ 55],
00:13:35.179 | 30.00th=[ 59], 40.00th=[ 67], 50.00th=[ 79], 60.00th=[ 89],
00:13:35.179 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 124], 95.00th=[ 155],
00:13:35.179 | 99.00th=[ 230], 99.50th=[ 243], 99.90th=[ 313], 99.95th=[ 313],
00:13:35.179 | 99.99th=[ 313]
00:13:35.179 bw ( KiB/s): min= 1277, max=18176, per=0.71%, avg=10222.21, stdev=4884.03, samples=19
00:13:35.179 iops : min= 9, max= 142, avg=79.68, stdev=38.32, samples=19
00:13:35.179 lat (msec) : 4=0.19%, 10=12.80%, 20=27.91%, 50=11.52%, 100=35.08%
00:13:35.179 lat (msec) : 250=12.23%, 500=0.26%
00:13:35.179 cpu : usr=0.54%, sys=0.28%, ctx=2703, majf=0, minf=5
00:13:35.179 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:35.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.179 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.179 issued rwts: total=762,800,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:35.179 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:35.179 job98: (groupid=0, jobs=1): err= 0: pid=85416: Tue Jul 23 22:15:07 2024
00:13:35.179 read: IOPS=79, BW=9.99MiB/s (10.5MB/s)(80.0MiB/8011msec)
00:13:35.179 slat (usec): min=5, max=1696, avg=50.09, stdev=119.60
00:13:35.179 clat (msec): min=3, max=294, avg=19.57, stdev=35.60
00:13:35.179 lat (msec): min=3, max=294, avg=19.62, stdev=35.59
00:13:35.179 clat percentiles (msec):
00:13:35.179 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8],
00:13:35.179 | 30.00th=[ 9], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12],
00:13:35.179 | 70.00th=[ 14], 80.00th=[ 18], 90.00th=[ 32], 95.00th=[ 50],
00:13:35.179 | 99.00th=[ 292], 99.50th=[ 296], 99.90th=[ 296], 99.95th=[ 296],
00:13:35.179 | 99.99th=[ 296]
00:13:35.179 write: IOPS=91, BW=11.4MiB/s (12.0MB/s)(96.2MiB/8438msec); 0 zone resets
00:13:35.179 slat (usec): min=26, max=1744, avg=125.29, stdev=162.65
00:13:35.179 clat (msec): min=36, max=304, avg=87.22, stdev=36.62
00:13:35.179 lat (msec): min=37, max=304, avg=87.35, stdev=36.63
00:13:35.179 clat percentiles (msec):
00:13:35.179 | 1.00th=[ 47], 5.00th=[ 51], 10.00th=[ 54], 20.00th=[ 58],
00:13:35.179 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 79], 60.00th=[ 87],
00:13:35.179 | 70.00th=[ 96], 80.00th=[ 110], 90.00th=[ 136], 95.00th=[ 165],
00:13:35.179 | 99.00th=[ 199], 99.50th=[ 241], 99.90th=[ 305], 99.95th=[ 305],
00:13:35.179 | 99.99th=[ 305]
00:13:35.179 bw ( KiB/s): min= 3548, max=17304, per=0.70%, avg=10090.22, stdev=3902.81, samples=18
00:13:35.179 iops : min= 27, max= 135, avg=78.28, stdev=30.56, samples=18
00:13:35.179 lat (msec) : 4=0.07%, 10=17.66%, 20=19.93%, 50=7.73%, 100=39.43%
00:13:35.179 lat (msec) : 250=14.47%, 500=0.71%
00:13:35.179 cpu : usr=0.52%, sys=0.22%, ctx=2382, majf=0, minf=11
00:13:35.179 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:35.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.179 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.179 issued rwts: total=640,770,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:35.179 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:35.179 job99: (groupid=0, jobs=1): err= 0: pid=85417: Tue Jul 23 22:15:07 2024
00:13:35.179 read: IOPS=87, BW=11.0MiB/s (11.5MB/s)(92.2MiB/8422msec)
00:13:35.179 slat (usec): min=5, max=2164, avg=60.20, stdev=137.55
00:13:35.179 clat (msec): min=3, max=114, avg=16.17, stdev=14.12
00:13:35.179 lat (msec): min=3, max=114, avg=16.23, stdev=14.12
00:13:35.179 clat percentiles (msec):
00:13:35.179 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 9],
00:13:35.179 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 14],
00:13:35.179 | 70.00th=[ 17], 80.00th=[ 20], 90.00th=[ 28], 95.00th=[ 41],
00:13:35.179 | 99.00th=[ 84], 99.50th=[ 107], 99.90th=[ 115], 99.95th=[ 115],
00:13:35.179 | 99.99th=[ 115]
00:13:35.179 write: IOPS=94, BW=11.8MiB/s (12.3MB/s)(100MiB/8506msec); 0 zone resets
00:13:35.179 slat (usec): min=37, max=35552, avg=196.81, stdev=1416.93
00:13:35.179 clat (msec): min=18, max=318, avg=83.88, stdev=36.28
00:13:35.179 lat (msec): min=21, max=319, avg=84.07, stdev=36.18
00:13:35.179 clat percentiles (msec):
00:13:35.179 | 1.00th=[ 26], 5.00th=[ 50], 10.00th=[ 52], 20.00th=[ 57],
00:13:35.179 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 74], 60.00th=[ 85],
00:13:35.179 | 70.00th=[ 94], 80.00th=[ 105], 90.00th=[ 125], 95.00th=[ 150],
00:13:35.179 | 99.00th=[ 215], 99.50th=[ 271], 99.90th=[ 321], 99.95th=[ 321],
00:13:35.179 | 99.99th=[ 321]
00:13:35.179 bw ( KiB/s): min= 2048, max=17408, per=0.71%, avg=10239.45, stdev=4944.39, samples=20
00:13:35.179 iops : min= 16, max= 136, avg=79.90, stdev=38.65, samples=20
00:13:35.179 lat (msec) : 4=0.07%, 10=16.58%, 20=22.24%, 50=11.12%, 100=37.78%
00:13:35.179 lat (msec) : 250=11.83%, 500=0.39%
00:13:35.179 cpu : usr=0.52%, sys=0.28%, ctx=2731, majf=0, minf=7
00:13:35.179 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:35.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.179 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:35.179 issued rwts: total=738,800,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:35.179 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:35.179
00:13:35.179 Run status group 0 (all jobs):
00:13:35.179 READ: bw=1240MiB/s (1300MB/s), 9474KiB/s-17.6MiB/s (9701kB/s-18.5MB/s), io=11.4GiB (12.3GB), run=7847-9446msec
00:13:35.179 WRITE: bw=1407MiB/s (1475MB/s), 11.1MiB/s-19.5MiB/s (11.7MB/s-20.4MB/s), io=12.4GiB (13.3GB), run=8076-9051msec
00:13:35.179
00:13:35.179 Disk stats (read/write):
00:13:35.179 sdb: ios=873/934, merge=0/0, ticks=12925/63871, in_queue=76796, util=71.47%
00:13:35.179 sdf: ios=836/833, merge=0/0, ticks=9563/66664, in_queue=76228, util=71.45%
00:13:35.179 sdj: ios=802/851, merge=0/0, ticks=10630/65228, in_queue=75859, util=71.92%
00:13:35.179 sdq: ios=802/954, merge=0/0, ticks=8942/68170, in_queue=77113, util=72.68%
00:13:35.179 sdv: ios=801/801, merge=0/0, ticks=13781/62692, in_queue=76473, util=72.82%
00:13:35.179 sdab: ios=640/796, merge=0/0, ticks=7910/68903, in_queue=76813, util=72.75%
00:13:35.179 sdae: ios=719/800, merge=0/0, ticks=10247/66546, in_queue=76794, util=73.19%
00:13:35.179 sdak: ios=803/896, merge=0/0, ticks=10671/64872, in_queue=75544, util=73.49%
00:13:35.179 sdat: ios=802/949, merge=0/0, ticks=8553/67581, in_queue=76134, util=73.97%
00:13:35.179 sdaz: ios=802/925, merge=0/0, ticks=10947/65276, in_queue=76224, util=74.36%
00:13:35.179 sdd: ios=1122/1171, merge=0/0, ticks=9972/66492, in_queue=76465, util=74.40%
00:13:35.179 sdk: ios=1121/1129, merge=0/0, ticks=11708/64632, in_queue=76340, util=74.43%
00:13:35.179 sdp: ios=1282/1284, merge=0/0, ticks=12537/63725, in_queue=76263, util=74.95%
00:13:35.179 sdw: ios=1122/1275, merge=0/0, ticks=10141/65887, in_queue=76029, util=74.99%
00:13:35.179 sdad: ios=1122/1209, merge=0/0, ticks=10696/64582, in_queue=75279, util=75.14%
00:13:35.179 sdaj: ios=1208/1280, merge=0/0, ticks=11645/65778, in_queue=77423, util=76.15%
00:13:35.179 sdap: ios=1318/1315, merge=0/0, ticks=9705/67299, in_queue=77004, util=76.39%
00:13:35.179 sdaw: ios=1184/1280, merge=0/0, ticks=11356/65024, in_queue=76380, util=76.21%
00:13:35.179 sdbd: ios=1122/1183, merge=0/0, ticks=11522/64478, in_queue=76001, util=76.85%
00:13:35.179 sdbj: ios=1156/1275, merge=0/0, ticks=10355/66377, in_queue=76732, util=77.19%
00:13:35.179 sdi: ios=1121/1121, merge=0/0, ticks=12966/62652, in_queue=75619, util=77.14%
00:13:35.179 sdn: ios=1154/1277, merge=0/0, ticks=11116/65664, in_queue=76781, util=77.72%
00:13:35.179 sdt: ios=1158/1209, merge=0/0, ticks=10814/65521, in_queue=76335, util=77.68%
00:13:35.180 sdz: ios=1122/1169, merge=0/0, ticks=11640/64613, in_queue=76254, util=77.87%
00:13:35.180 sdah: ios=1122/1263, merge=0/0, ticks=9349/66855, in_queue=76205, util=78.47%
00:13:35.180 sdan: ios=1291/1280, merge=0/0, ticks=11053/65344, in_queue=76397, util=78.67%
00:13:35.180 sdau: ios=1282/1280, merge=0/0, ticks=10187/65387, in_queue=75575, util=78.41%
00:13:35.180 sdbc: ios=1122/1138, merge=0/0, ticks=13375/63221, in_queue=76596, util=78.64%
00:13:35.180 sdbi: ios=1122/1225, merge=0/0, ticks=8924/66991, in_queue=75915, util=78.71%
00:13:35.180 sdbn: ios=1122/1137, merge=0/0, ticks=14746/61700, in_queue=76446, util=79.02%
00:13:35.180 sdg: ios=802/854, merge=0/0, ticks=9712/66414, in_queue=76127, util=78.95%
00:13:35.180 sdm: ios=802/888, merge=0/0, ticks=8649/68088, in_queue=76737, util=79.38%
00:13:35.180 sds: ios=838/898, merge=0/0, ticks=10478/67008, in_queue=77486, util=79.90%
00:13:35.180 sdy: ios=737/800, merge=0/0, ticks=13639/62602, in_queue=76241, util=78.67%
00:13:35.180 sdac: ios=802/913, merge=0/0, ticks=9386/67495, in_queue=76881, util=79.55%
00:13:35.180 sdai: ios=640/756, merge=0/0, ticks=10730/66401, in_queue=77132, util=79.69%
00:13:35.180 sdao: ios=802/867, merge=0/0, ticks=11935/64508, in_queue=76444, util=80.29%
00:13:35.180 sdar: ios=652/800, merge=0/0, ticks=13691/62379, in_queue=76071,
util=80.30% 00:13:35.180 sday: ios=802/856, merge=0/0, ticks=11509/64925, in_queue=76434, util=81.12% 00:13:35.180 sdbf: ios=641/757, merge=0/0, ticks=13178/63641, in_queue=76819, util=81.06% 00:13:35.180 sdo: ios=802/880, merge=0/0, ticks=12166/64199, in_queue=76365, util=81.24% 00:13:35.180 sdu: ios=648/800, merge=0/0, ticks=11261/65716, in_queue=76978, util=81.31% 00:13:35.180 sdaa: ios=802/899, merge=0/0, ticks=10213/65938, in_queue=76152, util=81.74% 00:13:35.180 sdag: ios=802/913, merge=0/0, ticks=9637/66532, in_queue=76169, util=81.36% 00:13:35.180 sdam: ios=802/938, merge=0/0, ticks=11103/65164, in_queue=76267, util=81.71% 00:13:35.180 sdaq: ios=801/800, merge=0/0, ticks=9288/67218, in_queue=76506, util=81.79% 00:13:35.180 sdav: ios=802/939, merge=0/0, ticks=11565/65386, in_queue=76951, util=82.32% 00:13:35.180 sdbb: ios=801/803, merge=0/0, ticks=13736/62290, in_queue=76026, util=81.80% 00:13:35.180 sdbg: ios=800/844, merge=0/0, ticks=7982/68761, in_queue=76743, util=82.18% 00:13:35.180 sdbm: ios=802/932, merge=0/0, ticks=9633/66662, in_queue=76295, util=82.82% 00:13:35.180 sdax: ios=1122/1166, merge=0/0, ticks=8228/68470, in_queue=76698, util=82.82% 00:13:35.180 sdbe: ios=1122/1251, merge=0/0, ticks=11870/64273, in_queue=76143, util=83.31% 00:13:35.180 sdbk: ios=1122/1192, merge=0/0, ticks=10279/66162, in_queue=76441, util=82.93% 00:13:35.180 sdbo: ios=1122/1187, merge=0/0, ticks=9908/66861, in_queue=76770, util=83.57% 00:13:35.180 sdbq: ios=1282/1280, merge=0/0, ticks=12750/63263, in_queue=76014, util=83.75% 00:13:35.180 sdbr: ios=1122/1218, merge=0/0, ticks=9344/67398, in_queue=76743, util=83.98% 00:13:35.180 sdbt: ios=1166/1280, merge=0/0, ticks=12090/64978, in_queue=77068, util=84.72% 00:13:35.180 sdbv: ios=1282/1280, merge=0/0, ticks=11378/64474, in_queue=75852, util=84.46% 00:13:35.180 sdby: ios=1122/1185, merge=0/0, ticks=8085/68185, in_queue=76270, util=84.64% 00:13:35.180 sdca: ios=1321/1282, merge=0/0, ticks=13461/63189, in_queue=76650, 
util=85.60% 00:13:35.180 sdba: ios=1122/1264, merge=0/0, ticks=10391/65702, in_queue=76093, util=85.46% 00:13:35.180 sdbh: ios=1122/1167, merge=0/0, ticks=10377/65763, in_queue=76140, util=85.65% 00:13:35.180 sdbl: ios=1158/1221, merge=0/0, ticks=12461/64830, in_queue=77292, util=86.17% 00:13:35.180 sdbp: ios=1122/1146, merge=0/0, ticks=7368/69091, in_queue=76459, util=86.18% 00:13:35.180 sdbs: ios=1122/1160, merge=0/0, ticks=11477/65042, in_queue=76520, util=86.63% 00:13:35.180 sdbu: ios=1122/1228, merge=0/0, ticks=13325/62564, in_queue=75890, util=86.62% 00:13:35.180 sdbw: ios=1122/1159, merge=0/0, ticks=9753/66733, in_queue=76487, util=87.03% 00:13:35.180 sdbx: ios=1166/1280, merge=0/0, ticks=10354/66660, in_queue=77015, util=87.35% 00:13:35.180 sdbz: ios=1097/1120, merge=0/0, ticks=12566/64151, in_queue=76717, util=87.04% 00:13:35.180 sdcb: ios=1316/1286, merge=0/0, ticks=9930/67148, in_queue=77078, util=88.18% 00:13:35.180 sdcc: ios=801/828, merge=0/0, ticks=8630/67244, in_queue=75875, util=87.97% 00:13:35.180 sdcg: ios=640/730, merge=0/0, ticks=13086/63648, in_queue=76734, util=87.97% 00:13:35.180 sdci: ios=648/800, merge=0/0, ticks=8812/67325, in_queue=76138, util=88.26% 00:13:35.180 sdcl: ios=801/853, merge=0/0, ticks=9407/66256, in_queue=75663, util=88.35% 00:13:35.180 sdcm: ios=718/800, merge=0/0, ticks=11166/65134, in_queue=76300, util=89.04% 00:13:35.180 sdcn: ios=801/817, merge=0/0, ticks=9213/66666, in_queue=75880, util=89.32% 00:13:35.180 sdcp: ios=801/849, merge=0/0, ticks=9980/65833, in_queue=75814, util=89.65% 00:13:35.180 sdcr: ios=802/878, merge=0/0, ticks=11700/64297, in_queue=75997, util=89.81% 00:13:35.180 sdct: ios=802/891, merge=0/0, ticks=11399/64641, in_queue=76041, util=90.49% 00:13:35.180 sdcv: ios=802/873, merge=0/0, ticks=11879/64269, in_queue=76149, util=90.51% 00:13:35.180 sdcd: ios=752/800, merge=0/0, ticks=9295/67645, in_queue=76940, util=90.95% 00:13:35.180 sdce: ios=802/943, merge=0/0, ticks=9904/67126, in_queue=77030, 
util=91.56% 00:13:35.180 sdcf: ios=802/928, merge=0/0, ticks=12178/64015, in_queue=76194, util=91.63% 00:13:35.180 sdch: ios=643/800, merge=0/0, ticks=9211/67703, in_queue=76915, util=91.86% 00:13:35.180 sdcj: ios=802/883, merge=0/0, ticks=10225/65416, in_queue=75642, util=92.31% 00:13:35.180 sdck: ios=802/938, merge=0/0, ticks=8451/68364, in_queue=76815, util=92.82% 00:13:35.180 sdco: ios=801/803, merge=0/0, ticks=9770/66444, in_queue=76214, util=92.87% 00:13:35.180 sdcq: ios=801/814, merge=0/0, ticks=10243/66262, in_queue=76505, util=93.23% 00:13:35.180 sdcs: ios=801/870, merge=0/0, ticks=8915/67409, in_queue=76324, util=93.09% 00:13:35.180 sdcu: ios=801/919, merge=0/0, ticks=10593/65566, in_queue=76159, util=93.54% 00:13:35.180 sda: ios=802/896, merge=0/0, ticks=9063/67399, in_queue=76463, util=93.91% 00:13:35.180 sdc: ios=801/883, merge=0/0, ticks=10870/64705, in_queue=75576, util=94.52% 00:13:35.180 sde: ios=640/757, merge=0/0, ticks=9754/67148, in_queue=76902, util=94.92% 00:13:35.180 sdh: ios=801/828, merge=0/0, ticks=12043/63311, in_queue=75355, util=95.79% 00:13:35.180 sdl: ios=802/884, merge=0/0, ticks=10423/66111, in_queue=76534, util=95.57% 00:13:35.180 sdr: ios=802/833, merge=0/0, ticks=11510/64025, in_queue=75535, util=96.50% 00:13:35.180 sdx: ios=801/850, merge=0/0, ticks=12412/62973, in_queue=75385, util=96.98% 00:13:35.180 sdaf: ios=666/800, merge=0/0, ticks=9977/66556, in_queue=76534, util=97.56% 00:13:35.180 sdal: ios=640/746, merge=0/0, ticks=12344/64623, in_queue=76968, util=97.81% 00:13:35.180 sdas: ios=683/800, merge=0/0, ticks=10819/65892, in_queue=76712, util=97.80% 00:13:35.180 [2024-07-23 22:15:07.218234] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.223190] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.224585] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 
[2024-07-23 22:15:07.225981] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.228345] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.230491] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.232165] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.234448] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.235979] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 22:15:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@78 -- # timing_exit fio 00:13:35.180 22:15:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:35.180 22:15:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:35.180 [2024-07-23 22:15:07.237897] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.239704] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.245339] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.246735] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.248733] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.250916] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.252415] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.255228] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 
00:13:35.180 [2024-07-23 22:15:07.256787] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.258291] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.260335] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.262127] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.263749] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.265665] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.268518] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.270448] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 22:15:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@80 -- # rm -f ./local-job0-0-verify.state 00:13:35.180 22:15:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:13:35.180 22:15:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@83 -- # rm -f 00:13:35.180 [2024-07-23 22:15:07.274996] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 Cleaning up iSCSI connection 00:13:35.180 22:15:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@84 -- # iscsicleanup 00:13:35.180 [2024-07-23 22:15:07.276853] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 22:15:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:13:35.180 22:15:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:13:35.180 [2024-07-23 22:15:07.278595] scsi_bdev.c: 616:bdev_scsi_inquiry: 
*NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.280234] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.282284] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.283831] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.289426] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.290834] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.292807] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.300596] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.306474] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.180 [2024-07-23 22:15:07.308776] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.181 [2024-07-23 22:15:07.310737] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.181 [2024-07-23 22:15:07.312278] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.181 [2024-07-23 22:15:07.313755] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.181 [2024-07-23 22:15:07.317314] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.181 [2024-07-23 22:15:07.319821] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.181 [2024-07-23 22:15:07.322823] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.181 [2024-07-23 22:15:07.328639] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 
00:13:35.181 [2024-07-23 22:15:07.332592] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.181 [2024-07-23 22:15:07.334087] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.181 [2024-07-23 22:15:07.336174] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.181 [2024-07-23 22:15:07.338803] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.181 [2024-07-23 22:15:07.342922] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.440 [2024-07-23 22:15:07.345100] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.440 [2024-07-23 22:15:07.346575] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.440 [2024-07-23 22:15:07.348440] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.440 [2024-07-23 22:15:07.351420] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.354023] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.356408] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.359111] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.360664] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.363511] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.366167] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.384540] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.387352] 
scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.390833] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.394155] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.396306] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.398662] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.400674] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.402691] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.404710] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.406731] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.409039] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.414506] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.417723] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.424283] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.426852] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.429074] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.432066] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.435486] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: 
unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.439231] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.441832] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.446043] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.448850] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.483729] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.487550] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.491568] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.493875] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.441 [2024-07-23 22:15:07.496570] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:35.701 Logging out of session [sid: 10, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:13:35.701 Logging out of session [sid: 11, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:13:35.701 Logging out of session [sid: 12, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:13:35.701 Logging out of session [sid: 13, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:13:35.701 Logging out of session [sid: 14, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:13:35.701 Logging out of session [sid: 15, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:13:35.701 Logging out of session [sid: 16, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:13:35.701 Logging out of session [sid: 17, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:13:35.701 
Logging out of session [sid: 18, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:13:35.701 Logging out of session [sid: 9, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:13:35.701 Logout of [sid: 10, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:13:35.701 Logout of [sid: 11, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:13:35.701 Logout of [sid: 12, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:13:35.701 Logout of [sid: 13, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:13:35.701 Logout of [sid: 14, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:13:35.701 Logout of [sid: 15, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:13:35.701 Logout of [sid: 16, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:13:35.701 Logout of [sid: 17, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:13:35.701 Logout of [sid: 18, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:13:35.701 Logout of [sid: 9, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:13:35.701 22:15:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:13:35.701 22:15:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@983 -- # rm -rf 00:13:35.701 22:15:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@85 -- # killprocess 82433 00:13:35.701 22:15:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@948 -- # '[' -z 82433 ']' 00:13:35.701 22:15:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@952 -- # kill -0 82433 00:13:35.701 22:15:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@953 -- # uname 00:13:35.701 22:15:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:35.701 22:15:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82433 00:13:35.701 22:15:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:35.701 22:15:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:35.701 killing process with pid 82433 00:13:35.701 22:15:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82433' 00:13:35.701 22:15:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@967 -- # kill 82433 00:13:35.701 22:15:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@972 -- # wait 82433 00:13:36.639 22:15:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@86 -- # iscsitestfini 00:13:36.639 22:15:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:13:36.639 00:13:36.639 real 0m50.401s 00:13:36.639 user 3m23.606s 00:13:36.639 sys 0m28.554s 00:13:36.639 22:15:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:36.639 ************************************ 00:13:36.639 22:15:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
common/autotest_common.sh@10 -- # set +x 00:13:36.639 END TEST iscsi_tgt_iscsi_lvol 00:13:36.639 ************************************ 00:13:36.639 22:15:08 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@37 -- # run_test iscsi_tgt_fio /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/fio.sh 00:13:36.639 22:15:08 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:36.639 22:15:08 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:36.639 22:15:08 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:13:36.639 ************************************ 00:13:36.639 START TEST iscsi_tgt_fio 00:13:36.639 ************************************ 00:13:36.639 22:15:08 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/fio.sh 00:13:36.639 * Looking for test storage... 00:13:36.639 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio 00:13:36.639 22:15:08 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:13:36.639 22:15:08 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:13:36.639 22:15:08 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:13:36.639 22:15:08 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:13:36.639 22:15:08 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:13:36.639 22:15:08 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:13:36.639 22:15:08 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:13:36.639 22:15:08 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:13:36.639 22:15:08 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:13:36.639 22:15:08 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@17 -- # 
TARGET_BRIDGE2=tgt_br2 00:13:36.639 22:15:08 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:13:36.639 22:15:08 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:13:36.639 22:15:08 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:13:36.639 22:15:08 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:13:36.640 22:15:08 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:13:36.640 22:15:08 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:13:36.640 22:15:08 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:13:36.640 22:15:08 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:13:36.640 22:15:08 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:13:36.640 22:15:08 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:13:36.640 22:15:08 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@11 -- # iscsitestinit 00:13:36.640 22:15:08 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:13:36.640 22:15:08 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@48 -- # '[' -z 10.0.0.1 ']' 00:13:36.640 22:15:08 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@53 -- # '[' -z 10.0.0.2 ']' 00:13:36.640 22:15:08 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@58 -- # MALLOC_BDEV_SIZE=64 00:13:36.640 22:15:08 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@59 -- # MALLOC_BLOCK_SIZE=4096 00:13:36.640 22:15:08 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@60 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:36.640 22:15:08 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@61 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:13:36.640 22:15:08 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@63 -- # timing_enter start_iscsi_tgt 00:13:36.640 22:15:08 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:36.640 
22:15:08 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:13:36.640 22:15:08 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@66 -- # pid=87245 00:13:36.640 22:15:08 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@67 -- # echo 'Process pid: 87245' 00:13:36.640 Process pid: 87245 00:13:36.640 22:15:08 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@69 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 00:13:36.640 22:15:08 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@65 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:13:36.640 22:15:08 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@71 -- # waitforlisten 87245 00:13:36.640 22:15:08 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@829 -- # '[' -z 87245 ']' 00:13:36.640 22:15:08 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.640 22:15:08 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:36.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.640 22:15:08 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.640 22:15:08 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:36.640 22:15:08 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:13:36.899 [2024-07-23 22:15:08.874164] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:13:36.899 [2024-07-23 22:15:08.874234] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87245 ] 00:13:36.899 [2024-07-23 22:15:08.993435] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:13:36.899 [2024-07-23 22:15:09.002616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.899 [2024-07-23 22:15:09.046159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.836 22:15:09 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:37.836 22:15:09 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@862 -- # return 0 00:13:37.836 22:15:09 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:13:37.836 [2024-07-23 22:15:09.898495] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:38.094 iscsi_tgt is listening. Running tests... 00:13:38.094 22:15:10 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@75 -- # echo 'iscsi_tgt is listening. Running tests...' 00:13:38.094 22:15:10 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@77 -- # timing_exit start_iscsi_tgt 00:13:38.094 22:15:10 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:38.094 22:15:10 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:13:38.094 22:15:10 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:13:38.353 22:15:10 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:13:38.611 22:15:10 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 4096 00:13:38.870 22:15:10 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@82 -- # malloc_bdevs='Malloc0 ' 00:13:38.870 22:15:10 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 4096 00:13:39.129 22:15:11 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@83 -- # malloc_bdevs+=Malloc1 00:13:39.129 22:15:11 iscsi_tgt.iscsi_tgt_fio -- 
fio/fio.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:39.388 22:15:11 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 1024 512 00:13:39.647 22:15:11 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@85 -- # bdev=Malloc2 00:13:39.648 22:15:11 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias 'raid0:0 Malloc2:1' 1:2 64 -d 00:13:39.907 22:15:11 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@91 -- # sleep 1 00:13:40.844 22:15:12 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@93 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:13:40.844 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:13:40.844 22:15:12 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@94 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:13:40.844 [2024-07-23 22:15:13.003477] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:40.844 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:13:40.844 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:13:40.844 22:15:13 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@95 -- # waitforiscsidevices 2 00:13:40.844 22:15:13 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@116 -- # local num=2 00:13:40.844 22:15:13 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:13:40.844 22:15:13 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:13:40.844 22:15:13 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:13:40.844 [2024-07-23 22:15:13.016632] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:40.844 22:15:13 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:13:40.844 22:15:13 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # n=2 00:13:40.844 22:15:13 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@120 -- # '[' 2 -ne 2 ']' 00:13:40.844 22:15:13 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@123 -- # return 0 00:13:40.844 22:15:13 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@97 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; delete_tmp_files; exit 1' SIGINT SIGTERM EXIT 00:13:40.844 22:15:13 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 1 -t randrw -r 1 -v 00:13:41.103 [global] 00:13:41.103 thread=1 00:13:41.103 invalidate=1 00:13:41.103 rw=randrw 00:13:41.103 time_based=1 00:13:41.103 runtime=1 00:13:41.103 ioengine=libaio 00:13:41.103 direct=1 00:13:41.103 bs=4096 00:13:41.103 iodepth=1 00:13:41.103 norandommap=0 00:13:41.103 numjobs=1 00:13:41.103 00:13:41.103 verify_dump=1 00:13:41.103 verify_backlog=512 00:13:41.103 verify_state_save=0 00:13:41.103 do_verify=1 00:13:41.103 verify=crc32c-intel 00:13:41.103 [job0] 00:13:41.103 filename=/dev/sda 00:13:41.103 [job1] 00:13:41.103 filename=/dev/sdb 00:13:41.103 queue_depth set to 113 (sda) 00:13:41.103 queue_depth set to 113 (sdb) 00:13:41.103 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:41.103 job1: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:41.103 fio-3.35 00:13:41.103 Starting 2 threads 00:13:41.103 [2024-07-23 22:15:13.243627] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:41.103 [2024-07-23 22:15:13.247460] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:42.482 [2024-07-23 22:15:14.360472] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:42.482 [2024-07-23 22:15:14.363846] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:42.482 00:13:42.482 job0: (groupid=0, jobs=1): err= 0: pid=87379: Tue Jul 23 22:15:14 2024 00:13:42.482 read: IOPS=7434, BW=29.0MiB/s (30.5MB/s)(29.1MiB/1001msec) 00:13:42.482 slat (nsec): min=2661, max=30887, avg=5879.67, stdev=1591.85 00:13:42.482 clat (usec): min=51, max=231, avg=80.75, stdev= 7.09 00:13:42.482 lat (usec): min=58, max=262, avg=86.63, stdev= 7.84 00:13:42.482 clat percentiles (usec): 00:13:42.482 | 1.00th=[ 72], 5.00th=[ 74], 10.00th=[ 75], 20.00th=[ 76], 00:13:42.482 | 30.00th=[ 77], 40.00th=[ 78], 50.00th=[ 80], 60.00th=[ 82], 00:13:42.482 | 70.00th=[ 84], 80.00th=[ 86], 90.00th=[ 89], 95.00th=[ 94], 00:13:42.482 | 99.00th=[ 104], 99.50th=[ 109], 99.90th=[ 121], 99.95th=[ 147], 00:13:42.482 | 99.99th=[ 233] 00:13:42.482 bw ( KiB/s): min=15536, max=15536, per=26.14%, avg=15536.00, stdev= 0.00, samples=1 00:13:42.482 iops : min= 3884, max= 3884, avg=3884.00, stdev= 0.00, samples=1 00:13:42.482 write: IOPS=3956, BW=15.5MiB/s (16.2MB/s)(15.5MiB/1001msec); 0 zone resets 00:13:42.482 slat (nsec): min=3733, max=28896, avg=6952.23, stdev=2018.42 00:13:42.482 clat (usec): min=68, max=163, avg=80.80, stdev= 7.65 00:13:42.482 lat (usec): min=72, max=186, avg=87.76, stdev= 8.70 00:13:42.482 clat percentiles (usec): 00:13:42.482 | 1.00th=[ 71], 5.00th=[ 72], 
10.00th=[ 74], 20.00th=[ 76], 00:13:42.482 | 30.00th=[ 78], 40.00th=[ 79], 50.00th=[ 80], 60.00th=[ 82], 00:13:42.482 | 70.00th=[ 83], 80.00th=[ 86], 90.00th=[ 89], 95.00th=[ 94], 00:13:42.482 | 99.00th=[ 105], 99.50th=[ 115], 99.90th=[ 149], 99.95th=[ 159], 00:13:42.482 | 99.99th=[ 163] 00:13:42.482 bw ( KiB/s): min=16384, max=16384, per=51.66%, avg=16384.00, stdev= 0.00, samples=1 00:13:42.482 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:13:42.482 lat (usec) : 100=98.23%, 250=1.77% 00:13:42.482 cpu : usr=3.50%, sys=9.40%, ctx=11402, majf=0, minf=13 00:13:42.482 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:42.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.482 issued rwts: total=7442,3960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.482 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:42.482 job1: (groupid=0, jobs=1): err= 0: pid=87381: Tue Jul 23 22:15:14 2024 00:13:42.482 read: IOPS=7422, BW=29.0MiB/s (30.4MB/s)(29.0MiB/1001msec) 00:13:42.482 slat (nsec): min=2511, max=46072, avg=3708.61, stdev=1833.88 00:13:42.482 clat (usec): min=56, max=372, avg=81.59, stdev= 7.43 00:13:42.482 lat (usec): min=65, max=377, avg=85.29, stdev= 8.17 00:13:42.482 clat percentiles (usec): 00:13:42.482 | 1.00th=[ 73], 5.00th=[ 75], 10.00th=[ 76], 20.00th=[ 77], 00:13:42.482 | 30.00th=[ 79], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 82], 00:13:42.482 | 70.00th=[ 83], 80.00th=[ 86], 90.00th=[ 89], 95.00th=[ 94], 00:13:42.482 | 99.00th=[ 103], 99.50th=[ 108], 99.90th=[ 116], 99.95th=[ 147], 00:13:42.482 | 99.99th=[ 371] 00:13:42.482 bw ( KiB/s): min=15624, max=15624, per=26.29%, avg=15624.00, stdev= 0.00, samples=1 00:13:42.482 iops : min= 3906, max= 3906, avg=3906.00, stdev= 0.00, samples=1 00:13:42.482 write: IOPS=3973, BW=15.5MiB/s (16.3MB/s)(15.5MiB/1001msec); 0 zone resets 00:13:42.482 slat 
(nsec): min=3252, max=27348, avg=4618.21, stdev=2033.21 00:13:42.482 clat (usec): min=65, max=167, avg=85.61, stdev= 6.91 00:13:42.482 lat (usec): min=75, max=173, avg=90.23, stdev= 7.58 00:13:42.482 clat percentiles (usec): 00:13:42.482 | 1.00th=[ 76], 5.00th=[ 78], 10.00th=[ 79], 20.00th=[ 81], 00:13:42.482 | 30.00th=[ 82], 40.00th=[ 84], 50.00th=[ 85], 60.00th=[ 87], 00:13:42.482 | 70.00th=[ 89], 80.00th=[ 90], 90.00th=[ 93], 95.00th=[ 98], 00:13:42.482 | 99.00th=[ 110], 99.50th=[ 116], 99.90th=[ 121], 99.95th=[ 143], 00:13:42.482 | 99.99th=[ 167] 00:13:42.482 bw ( KiB/s): min=16384, max=16384, per=51.66%, avg=16384.00, stdev= 0.00, samples=1 00:13:42.482 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:13:42.482 lat (usec) : 100=97.63%, 250=2.36%, 500=0.01% 00:13:42.482 cpu : usr=3.20%, sys=7.20%, ctx=11407, majf=0, minf=7 00:13:42.482 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:42.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.482 issued rwts: total=7430,3977,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.482 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:42.482 00:13:42.482 Run status group 0 (all jobs): 00:13:42.482 READ: bw=58.0MiB/s (60.9MB/s), 29.0MiB/s-29.0MiB/s (30.4MB/s-30.5MB/s), io=58.1MiB (60.9MB), run=1001-1001msec 00:13:42.482 WRITE: bw=31.0MiB/s (32.5MB/s), 15.5MiB/s-15.5MiB/s (16.2MB/s-16.3MB/s), io=31.0MiB (32.5MB), run=1001-1001msec 00:13:42.482 00:13:42.482 Disk stats (read/write): 00:13:42.482 sda: ios=6550/3544, merge=0/0, ticks=521/277, in_queue=798, util=90.57% 00:13:42.482 sdb: ios=6536/3565, merge=0/0, ticks=504/281, in_queue=786, util=90.94% 00:13:42.482 22:15:14 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 32 -t randrw -r 1 -v 00:13:42.482 [global] 00:13:42.482 thread=1 
00:13:42.482 invalidate=1 00:13:42.482 rw=randrw 00:13:42.482 time_based=1 00:13:42.482 runtime=1 00:13:42.482 ioengine=libaio 00:13:42.482 direct=1 00:13:42.482 bs=131072 00:13:42.482 iodepth=32 00:13:42.482 norandommap=0 00:13:42.482 numjobs=1 00:13:42.482 00:13:42.482 verify_dump=1 00:13:42.482 verify_backlog=512 00:13:42.482 verify_state_save=0 00:13:42.482 do_verify=1 00:13:42.482 verify=crc32c-intel 00:13:42.482 [job0] 00:13:42.482 filename=/dev/sda 00:13:42.482 [job1] 00:13:42.482 filename=/dev/sdb 00:13:42.482 queue_depth set to 113 (sda) 00:13:42.482 queue_depth set to 113 (sdb) 00:13:42.482 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:13:42.482 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:13:42.482 fio-3.35 00:13:42.482 Starting 2 threads 00:13:42.482 [2024-07-23 22:15:14.592843] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:42.482 [2024-07-23 22:15:14.596667] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.051 [2024-07-23 22:15:15.224033] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.621 [2024-07-23 22:15:15.720279] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.621 [2024-07-23 22:15:15.724646] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.621 00:13:43.621 job0: (groupid=0, jobs=1): err= 0: pid=87449: Tue Jul 23 22:15:15 2024 00:13:43.621 read: IOPS=2653, BW=332MiB/s (348MB/s)(335MiB/1011msec) 00:13:43.621 slat (nsec): min=6515, max=80254, avg=16036.07, stdev=5721.75 00:13:43.621 clat (usec): min=785, max=13662, avg=2789.38, stdev=1805.94 00:13:43.621 lat (usec): min=800, max=13675, avg=2805.42, stdev=1805.29 00:13:43.621 clat percentiles (usec): 00:13:43.621 | 1.00th=[ 881], 5.00th=[ 979], 10.00th=[ 1057], 
20.00th=[ 1139], 00:13:43.621 | 30.00th=[ 1237], 40.00th=[ 1418], 50.00th=[ 2442], 60.00th=[ 3621], 00:13:43.621 | 70.00th=[ 4113], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 4883], 00:13:43.621 | 99.00th=[ 9765], 99.50th=[11600], 99.90th=[13304], 99.95th=[13435], 00:13:43.621 | 99.99th=[13698] 00:13:43.621 bw ( KiB/s): min=163328, max=193024, per=30.28%, avg=178176.00, stdev=20998.24, samples=2 00:13:43.621 iops : min= 1276, max= 1508, avg=1392.00, stdev=164.05, samples=2 00:13:43.621 write: IOPS=1495, BW=187MiB/s (196MB/s)(182MiB/971msec); 0 zone resets 00:13:43.621 slat (usec): min=31, max=171, avg=70.31, stdev=15.09 00:13:43.621 clat (usec): min=2268, max=36435, avg=16865.79, stdev=3293.50 00:13:43.621 lat (usec): min=2348, max=36499, avg=16936.10, stdev=3296.86 00:13:43.621 clat percentiles (usec): 00:13:43.621 | 1.00th=[10552], 5.00th=[14091], 10.00th=[14746], 20.00th=[15270], 00:13:43.621 | 30.00th=[15664], 40.00th=[16057], 50.00th=[16319], 60.00th=[16581], 00:13:43.621 | 70.00th=[16909], 80.00th=[17433], 90.00th=[19268], 95.00th=[25297], 00:13:43.621 | 99.00th=[28967], 99.50th=[31851], 99.90th=[34866], 99.95th=[36439], 00:13:43.621 | 99.99th=[36439] 00:13:43.621 bw ( KiB/s): min=171776, max=192256, per=48.13%, avg=182016.00, stdev=14481.55, samples=2 00:13:43.621 iops : min= 1342, max= 1502, avg=1422.00, stdev=113.14, samples=2 00:13:43.621 lat (usec) : 1000=3.68% 00:13:43.621 lat (msec) : 2=26.31%, 4=13.54%, 10=21.04%, 20=32.21%, 50=3.22% 00:13:43.621 cpu : usr=11.88%, sys=8.81%, ctx=3126, majf=0, minf=11 00:13:43.621 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=97.8%, >=64=0.0% 00:13:43.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.621 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:13:43.621 issued rwts: total=2683,1452,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.621 latency : target=0, window=0, percentile=100.00%, depth=32 00:13:43.621 job1: (groupid=0, jobs=1): err= 0: 
pid=87450: Tue Jul 23 22:15:15 2024 00:13:43.621 read: IOPS=1942, BW=243MiB/s (255MB/s)(246MiB/1011msec) 00:13:43.621 slat (nsec): min=8785, max=63053, avg=17080.49, stdev=5873.87 00:13:43.621 clat (usec): min=714, max=14219, avg=2400.32, stdev=2075.60 00:13:43.621 lat (usec): min=746, max=14232, avg=2417.40, stdev=2076.15 00:13:43.621 clat percentiles (usec): 00:13:43.621 | 1.00th=[ 832], 5.00th=[ 947], 10.00th=[ 1004], 20.00th=[ 1074], 00:13:43.621 | 30.00th=[ 1139], 40.00th=[ 1205], 50.00th=[ 1303], 60.00th=[ 1663], 00:13:43.621 | 70.00th=[ 3621], 80.00th=[ 3884], 90.00th=[ 4146], 95.00th=[ 5276], 00:13:43.621 | 99.00th=[13435], 99.50th=[13960], 99.90th=[14222], 99.95th=[14222], 00:13:43.621 | 99.99th=[14222] 00:13:43.621 bw ( KiB/s): min=169472, max=202388, per=31.60%, avg=185930.00, stdev=23275.13, samples=2 00:13:43.621 iops : min= 1324, max= 1581, avg=1452.50, stdev=181.73, samples=2 00:13:43.621 write: IOPS=1518, BW=190MiB/s (199MB/s)(192MiB/1011msec); 0 zone resets 00:13:43.621 slat (usec): min=43, max=166, avg=71.04, stdev=15.73 00:13:43.621 clat (usec): min=9884, max=43136, avg=17849.63, stdev=5573.16 00:13:43.621 lat (usec): min=9977, max=43213, avg=17920.67, stdev=5575.76 00:13:43.621 clat percentiles (usec): 00:13:43.621 | 1.00th=[11076], 5.00th=[13566], 10.00th=[14746], 20.00th=[15401], 00:13:43.621 | 30.00th=[15795], 40.00th=[16057], 50.00th=[16319], 60.00th=[16581], 00:13:43.621 | 70.00th=[16909], 80.00th=[17695], 90.00th=[23725], 95.00th=[32113], 00:13:43.621 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:13:43.621 | 99.99th=[43254] 00:13:43.621 bw ( KiB/s): min=174080, max=211366, per=50.96%, avg=192723.00, stdev=26365.18, samples=2 00:13:43.621 iops : min= 1360, max= 1651, avg=1505.50, stdev=205.77, samples=2 00:13:43.621 lat (usec) : 750=0.06%, 1000=5.40% 00:13:43.621 lat (msec) : 2=29.52%, 4=12.86%, 10=7.29%, 20=39.38%, 50=5.49% 00:13:43.621 cpu : usr=12.18%, sys=6.14%, ctx=2843, majf=0, minf=11 00:13:43.621 IO depths 
: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=99.1%, >=64=0.0% 00:13:43.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:13:43.621 issued rwts: total=1964,1535,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.621 latency : target=0, window=0, percentile=100.00%, depth=32 00:13:43.621 00:13:43.621 Run status group 0 (all jobs): 00:13:43.621 READ: bw=575MiB/s (602MB/s), 243MiB/s-332MiB/s (255MB/s-348MB/s), io=581MiB (609MB), run=1011-1011msec 00:13:43.621 WRITE: bw=369MiB/s (387MB/s), 187MiB/s-190MiB/s (196MB/s-199MB/s), io=373MiB (392MB), run=971-1011msec 00:13:43.621 00:13:43.621 Disk stats (read/write): 00:13:43.621 sda: ios=2340/1290, merge=0/0, ticks=5962/21629, in_queue=27592, util=89.70% 00:13:43.621 sdb: ios=1833/1317, merge=0/0, ticks=4384/22913, in_queue=27297, util=90.37% 00:13:43.621 22:15:15 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@101 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 524288 -d 128 -t randrw -r 1 -v 00:13:43.621 [global] 00:13:43.621 thread=1 00:13:43.621 invalidate=1 00:13:43.621 rw=randrw 00:13:43.621 time_based=1 00:13:43.621 runtime=1 00:13:43.621 ioengine=libaio 00:13:43.621 direct=1 00:13:43.621 bs=524288 00:13:43.621 iodepth=128 00:13:43.621 norandommap=0 00:13:43.621 numjobs=1 00:13:43.621 00:13:43.621 verify_dump=1 00:13:43.621 verify_backlog=512 00:13:43.621 verify_state_save=0 00:13:43.621 do_verify=1 00:13:43.621 verify=crc32c-intel 00:13:43.621 [job0] 00:13:43.621 filename=/dev/sda 00:13:43.881 [job1] 00:13:43.881 filename=/dev/sdb 00:13:43.881 queue_depth set to 113 (sda) 00:13:43.881 queue_depth set to 113 (sdb) 00:13:43.881 job0: (g=0): rw=randrw, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=128 00:13:43.881 job1: (g=0): rw=randrw, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=128 00:13:43.881 fio-3.35 00:13:43.881 
Starting 2 threads 00:13:43.881 [2024-07-23 22:15:15.965439] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:43.881 [2024-07-23 22:15:15.968770] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:45.315 [2024-07-23 22:15:17.041254] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:45.315 [2024-07-23 22:15:17.413276] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:45.315 00:13:45.315 job0: (groupid=0, jobs=1): err= 0: pid=87520: Tue Jul 23 22:15:17 2024 00:13:45.315 read: IOPS=286, BW=143MiB/s (150MB/s)(183MiB/1273msec) 00:13:45.315 slat (usec): min=23, max=23279, avg=1274.09, stdev=2484.37 00:13:45.315 clat (msec): min=70, max=379, avg=244.69, stdev=76.40 00:13:45.315 lat (msec): min=72, max=379, avg=245.96, stdev=76.11 00:13:45.315 clat percentiles (msec): 00:13:45.315 | 1.00th=[ 78], 5.00th=[ 140], 10.00th=[ 180], 20.00th=[ 197], 00:13:45.315 | 30.00th=[ 203], 40.00th=[ 211], 50.00th=[ 218], 60.00th=[ 222], 00:13:45.315 | 70.00th=[ 271], 80.00th=[ 326], 90.00th=[ 376], 95.00th=[ 380], 00:13:45.315 | 99.00th=[ 380], 99.50th=[ 380], 99.90th=[ 380], 99.95th=[ 380], 00:13:45.315 | 99.99th=[ 380] 00:13:45.315 bw ( KiB/s): min=106496, max=141312, per=22.64%, avg=123904.00, stdev=24618.63, samples=2 00:13:45.315 iops : min= 208, max= 276, avg=242.00, stdev=48.08, samples=2 00:13:45.315 write: IOPS=308, BW=154MiB/s (162MB/s)(135MiB/874msec); 0 zone resets 00:13:45.315 slat (usec): min=167, max=16396, avg=1717.21, stdev=2068.49 00:13:45.315 clat (msec): min=67, max=298, avg=211.03, stdev=40.41 00:13:45.315 lat (msec): min=70, max=298, avg=212.75, stdev=40.60 00:13:45.315 clat percentiles (msec): 00:13:45.315 | 1.00th=[ 84], 5.00th=[ 111], 10.00th=[ 148], 20.00th=[ 192], 00:13:45.315 | 30.00th=[ 213], 40.00th=[ 220], 50.00th=[ 224], 60.00th=[ 226], 00:13:45.315 | 70.00th=[ 230], 80.00th=[ 232], 90.00th=[ 239], 95.00th=[ 259], 
00:13:45.315 | 99.00th=[ 296], 99.50th=[ 296], 99.90th=[ 300], 99.95th=[ 300], 00:13:45.315 | 99.99th=[ 300] 00:13:45.315 bw ( KiB/s): min=109568, max=166912, per=35.10%, avg=138240.00, stdev=40548.33, samples=2 00:13:45.315 iops : min= 214, max= 326, avg=270.00, stdev=79.20, samples=2 00:13:45.315 lat (msec) : 100=3.15%, 250=73.86%, 500=22.99% 00:13:45.315 cpu : usr=8.41%, sys=2.12%, ctx=411, majf=0, minf=5 00:13:45.315 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.5%, 16=5.0%, 32=10.1%, >=64=80.2% 00:13:45.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.315 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:13:45.315 issued rwts: total=365,270,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:45.315 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:45.315 job1: (groupid=0, jobs=1): err= 0: pid=87523: Tue Jul 23 22:15:17 2024 00:13:45.315 read: IOPS=949, BW=475MiB/s (498MB/s)(498MiB/1049msec) 00:13:45.315 slat (usec): min=20, max=10721, avg=565.89, stdev=1221.69 00:13:45.315 clat (msec): min=35, max=105, avg=76.32, stdev=12.29 00:13:45.315 lat (msec): min=39, max=105, avg=76.88, stdev=12.39 00:13:45.315 clat percentiles (msec): 00:13:45.315 | 1.00th=[ 50], 5.00th=[ 58], 10.00th=[ 62], 20.00th=[ 66], 00:13:45.315 | 30.00th=[ 68], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 82], 00:13:45.315 | 70.00th=[ 85], 80.00th=[ 89], 90.00th=[ 93], 95.00th=[ 97], 00:13:45.315 | 99.00th=[ 102], 99.50th=[ 103], 99.90th=[ 106], 99.95th=[ 106], 00:13:45.315 | 99.99th=[ 106] 00:13:45.315 bw ( KiB/s): min=175104, max=292279, per=42.69%, avg=233691.50, stdev=82855.24, samples=2 00:13:45.315 iops : min= 342, max= 570, avg=456.00, stdev=161.22, samples=2 00:13:45.315 write: IOPS=511, BW=256MiB/s (268MB/s)(269MiB/1049msec); 0 zone resets 00:13:45.315 slat (usec): min=146, max=9723, avg=689.34, stdev=1149.75 00:13:45.315 clat (msec): min=40, max=118, avg=98.42, stdev=13.10 00:13:45.315 lat (msec): min=40, max=118, avg=99.11, 
stdev=13.22 00:13:45.315 clat percentiles (msec): 00:13:45.315 | 1.00th=[ 45], 5.00th=[ 63], 10.00th=[ 90], 20.00th=[ 95], 00:13:45.315 | 30.00th=[ 99], 40.00th=[ 101], 50.00th=[ 102], 60.00th=[ 103], 00:13:45.315 | 70.00th=[ 105], 80.00th=[ 107], 90.00th=[ 110], 95.00th=[ 112], 00:13:45.315 | 99.00th=[ 115], 99.50th=[ 116], 99.90th=[ 120], 99.95th=[ 120], 00:13:45.315 | 99.99th=[ 120] 00:13:45.315 bw ( KiB/s): min=211968, max=311696, per=66.47%, avg=261832.00, stdev=70518.35, samples=2 00:13:45.315 iops : min= 414, max= 608, avg=511.00, stdev=137.18, samples=2 00:13:45.315 lat (msec) : 50=1.44%, 100=77.17%, 250=21.40% 00:13:45.315 cpu : usr=18.80%, sys=5.25%, ctx=377, majf=0, minf=13 00:13:45.315 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:13:45.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.315 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:45.315 issued rwts: total=996,537,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:45.315 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:45.315 00:13:45.315 Run status group 0 (all jobs): 00:13:45.315 READ: bw=535MiB/s (561MB/s), 143MiB/s-475MiB/s (150MB/s-498MB/s), io=681MiB (714MB), run=1049-1273msec 00:13:45.315 WRITE: bw=385MiB/s (403MB/s), 154MiB/s-256MiB/s (162MB/s-268MB/s), io=404MiB (423MB), run=874-1049msec 00:13:45.315 00:13:45.315 Disk stats (read/write): 00:13:45.315 sda: ios=414/270, merge=0/0, ticks=28827/26432, in_queue=55258, util=80.32% 00:13:45.315 sdb: ios=666/512, merge=0/0, ticks=19413/23801, in_queue=43214, util=82.91% 00:13:45.315 22:15:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1048576 -d 1024 -t read -r 1 -n 4 00:13:45.574 [global] 00:13:45.574 thread=1 00:13:45.574 invalidate=1 00:13:45.574 rw=read 00:13:45.574 time_based=1 00:13:45.574 runtime=1 00:13:45.574 ioengine=libaio 00:13:45.574 direct=1 00:13:45.574 bs=1048576 
00:13:45.574 iodepth=1024 00:13:45.574 norandommap=1 00:13:45.574 numjobs=4 00:13:45.574 00:13:45.574 [job0] 00:13:45.574 filename=/dev/sda 00:13:45.574 [job1] 00:13:45.574 filename=/dev/sdb 00:13:45.574 queue_depth set to 113 (sda) 00:13:45.574 queue_depth set to 113 (sdb) 00:13:45.574 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1024 00:13:45.574 ... 00:13:45.574 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1024 00:13:45.574 ... 00:13:45.574 fio-3.35 00:13:45.574 Starting 8 threads 00:13:47.479 00:13:47.479 job0: (groupid=0, jobs=1): err= 0: pid=87594: Tue Jul 23 22:15:19 2024 00:13:47.479 read: IOPS=26, BW=26.9MiB/s (28.2MB/s)(47.0MiB/1749msec) 00:13:47.479 slat (usec): min=597, max=621671, avg=22018.03, stdev=94587.81 00:13:47.479 clat (msec): min=713, max=1747, avg=1506.65, stdev=197.74 00:13:47.479 lat (msec): min=1335, max=1748, avg=1528.67, stdev=161.90 00:13:47.479 clat percentiles (msec): 00:13:47.479 | 1.00th=[ 718], 5.00th=[ 1334], 10.00th=[ 1351], 20.00th=[ 1368], 00:13:47.479 | 30.00th=[ 1385], 40.00th=[ 1401], 50.00th=[ 1435], 60.00th=[ 1653], 00:13:47.479 | 70.00th=[ 1670], 80.00th=[ 1703], 90.00th=[ 1737], 95.00th=[ 1737], 00:13:47.479 | 99.00th=[ 1754], 99.50th=[ 1754], 99.90th=[ 1754], 99.95th=[ 1754], 00:13:47.479 | 99.99th=[ 1754] 00:13:47.479 lat (msec) : 750=2.13%, 2000=97.87% 00:13:47.479 cpu : usr=0.00%, sys=2.29%, ctx=54, majf=0, minf=12033 00:13:47.479 IO depths : 1=2.1%, 2=4.3%, 4=8.5%, 8=17.0%, 16=34.0%, 32=34.0%, >=64=0.0% 00:13:47.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.479 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:13:47.479 issued rwts: total=47,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.479 latency : target=0, window=0, percentile=100.00%, depth=1024 00:13:47.479 job0: (groupid=0, jobs=1): err= 0: pid=87595: Tue Jul 23 
22:15:19 2024 00:13:47.479 read: IOPS=24, BW=24.9MiB/s (26.1MB/s)(43.0MiB/1730msec) 00:13:47.479 slat (usec): min=517, max=640228, avg=24490.53, stdev=101302.18 00:13:47.479 clat (msec): min=676, max=1728, avg=1587.35, stdev=205.63 00:13:47.479 lat (msec): min=1316, max=1729, avg=1611.84, stdev=149.61 00:13:47.479 clat percentiles (msec): 00:13:47.479 | 1.00th=[ 676], 5.00th=[ 1318], 10.00th=[ 1334], 20.00th=[ 1401], 00:13:47.479 | 30.00th=[ 1620], 40.00th=[ 1653], 50.00th=[ 1687], 60.00th=[ 1703], 00:13:47.479 | 70.00th=[ 1720], 80.00th=[ 1720], 90.00th=[ 1720], 95.00th=[ 1720], 00:13:47.479 | 99.00th=[ 1737], 99.50th=[ 1737], 99.90th=[ 1737], 99.95th=[ 1737], 00:13:47.480 | 99.99th=[ 1737] 00:13:47.480 lat (msec) : 750=2.33%, 2000=97.67% 00:13:47.480 cpu : usr=0.00%, sys=2.02%, ctx=52, majf=0, minf=11009 00:13:47.480 IO depths : 1=2.3%, 2=4.7%, 4=9.3%, 8=18.6%, 16=37.2%, 32=27.9%, >=64=0.0% 00:13:47.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.480 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:13:47.480 issued rwts: total=43,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.480 latency : target=0, window=0, percentile=100.00%, depth=1024 00:13:47.480 job0: (groupid=0, jobs=1): err= 0: pid=87596: Tue Jul 23 22:15:19 2024 00:13:47.480 read: IOPS=37, BW=37.1MiB/s (38.9MB/s)(65.0MiB/1754msec) 00:13:47.480 slat (usec): min=515, max=638400, avg=16455.65, stdev=82842.39 00:13:47.480 clat (msec): min=683, max=1752, avg=1608.29, stdev=196.60 00:13:47.480 lat (msec): min=1321, max=1753, avg=1624.74, stdev=159.19 00:13:47.480 clat percentiles (msec): 00:13:47.480 | 1.00th=[ 684], 5.00th=[ 1318], 10.00th=[ 1351], 20.00th=[ 1385], 00:13:47.480 | 30.00th=[ 1636], 40.00th=[ 1670], 50.00th=[ 1720], 60.00th=[ 1720], 00:13:47.480 | 70.00th=[ 1737], 80.00th=[ 1737], 90.00th=[ 1754], 95.00th=[ 1754], 00:13:47.480 | 99.00th=[ 1754], 99.50th=[ 1754], 99.90th=[ 1754], 99.95th=[ 1754], 00:13:47.480 | 99.99th=[ 1754] 
00:13:47.480 lat (msec) : 750=1.54%, 2000=98.46% 00:13:47.480 cpu : usr=0.00%, sys=2.68%, ctx=100, majf=0, minf=16641 00:13:47.480 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.3%, 16=24.6%, 32=49.2%, >=64=3.1% 00:13:47.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.480 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:13:47.480 issued rwts: total=65,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.480 latency : target=0, window=0, percentile=100.00%, depth=1024 00:13:47.480 job0: (groupid=0, jobs=1): err= 0: pid=87597: Tue Jul 23 22:15:19 2024 00:13:47.480 read: IOPS=34, BW=34.2MiB/s (35.9MB/s)(60.0MiB/1754msec) 00:13:47.480 slat (usec): min=516, max=624471, avg=17597.52, stdev=84387.66 00:13:47.480 clat (msec): min=697, max=1752, avg=1575.27, stdev=209.20 00:13:47.480 lat (msec): min=1321, max=1753, avg=1592.87, stdev=175.82 00:13:47.480 clat percentiles (msec): 00:13:47.480 | 1.00th=[ 701], 5.00th=[ 1318], 10.00th=[ 1318], 20.00th=[ 1368], 00:13:47.480 | 30.00th=[ 1401], 40.00th=[ 1636], 50.00th=[ 1687], 60.00th=[ 1737], 00:13:47.480 | 70.00th=[ 1737], 80.00th=[ 1737], 90.00th=[ 1754], 95.00th=[ 1754], 00:13:47.480 | 99.00th=[ 1754], 99.50th=[ 1754], 99.90th=[ 1754], 99.95th=[ 1754], 00:13:47.480 | 99.99th=[ 1754] 00:13:47.480 lat (msec) : 750=1.67%, 2000=98.33% 00:13:47.480 cpu : usr=0.06%, sys=2.51%, ctx=59, majf=0, minf=15361 00:13:47.480 IO depths : 1=1.7%, 2=3.3%, 4=6.7%, 8=13.3%, 16=26.7%, 32=48.3%, >=64=0.0% 00:13:47.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.480 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:13:47.480 issued rwts: total=60,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.480 latency : target=0, window=0, percentile=100.00%, depth=1024 00:13:47.480 job1: (groupid=0, jobs=1): err= 0: pid=87598: Tue Jul 23 22:15:19 2024 00:13:47.480 read: IOPS=27, BW=27.9MiB/s (29.2MB/s)(49.0MiB/1759msec) 00:13:47.480 slat (usec): 
min=502, max=628374, avg=21285.27, stdev=93627.90 00:13:47.480 clat (msec): min=714, max=1755, avg=1624.34, stdev=200.24 00:13:47.480 lat (msec): min=1343, max=1757, avg=1645.63, stdev=150.91 00:13:47.480 clat percentiles (msec): 00:13:47.480 | 1.00th=[ 718], 5.00th=[ 1351], 10.00th=[ 1368], 20.00th=[ 1401], 00:13:47.480 | 30.00th=[ 1653], 40.00th=[ 1703], 50.00th=[ 1737], 60.00th=[ 1737], 00:13:47.480 | 70.00th=[ 1754], 80.00th=[ 1754], 90.00th=[ 1754], 95.00th=[ 1754], 00:13:47.480 | 99.00th=[ 1754], 99.50th=[ 1754], 99.90th=[ 1754], 99.95th=[ 1754], 00:13:47.480 | 99.99th=[ 1754] 00:13:47.480 lat (msec) : 750=2.04%, 2000=97.96% 00:13:47.480 cpu : usr=0.00%, sys=2.39%, ctx=71, majf=0, minf=12545 00:13:47.480 IO depths : 1=2.0%, 2=4.1%, 4=8.2%, 8=16.3%, 16=32.7%, 32=36.7%, >=64=0.0% 00:13:47.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.480 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:13:47.480 issued rwts: total=49,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.480 latency : target=0, window=0, percentile=100.00%, depth=1024 00:13:47.480 job1: (groupid=0, jobs=1): err= 0: pid=87599: Tue Jul 23 22:15:19 2024 00:13:47.480 read: IOPS=32, BW=32.4MiB/s (34.0MB/s)(57.0MiB/1759msec) 00:13:47.480 slat (usec): min=500, max=635124, avg=18448.28, stdev=87638.83 00:13:47.480 clat (msec): min=707, max=1756, avg=1634.02, stdev=193.15 00:13:47.480 lat (msec): min=1342, max=1758, avg=1652.47, stdev=147.97 00:13:47.480 clat percentiles (msec): 00:13:47.480 | 1.00th=[ 709], 5.00th=[ 1351], 10.00th=[ 1368], 20.00th=[ 1418], 00:13:47.480 | 30.00th=[ 1670], 40.00th=[ 1720], 50.00th=[ 1737], 60.00th=[ 1737], 00:13:47.480 | 70.00th=[ 1737], 80.00th=[ 1754], 90.00th=[ 1754], 95.00th=[ 1754], 00:13:47.480 | 99.00th=[ 1754], 99.50th=[ 1754], 99.90th=[ 1754], 99.95th=[ 1754], 00:13:47.480 | 99.99th=[ 1754] 00:13:47.480 lat (msec) : 750=1.75%, 2000=98.25% 00:13:47.480 cpu : usr=0.00%, sys=2.28%, ctx=57, majf=0, minf=14593 
00:13:47.480 IO depths : 1=1.8%, 2=3.5%, 4=7.0%, 8=14.0%, 16=28.1%, 32=45.6%, >=64=0.0% 00:13:47.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.480 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:13:47.480 issued rwts: total=57,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.480 latency : target=0, window=0, percentile=100.00%, depth=1024 00:13:47.480 job1: (groupid=0, jobs=1): err= 0: pid=87600: Tue Jul 23 22:15:19 2024 00:13:47.480 read: IOPS=9, BW=9.96MiB/s (10.4MB/s)(17.0MiB/1706msec) 00:13:47.480 slat (usec): min=647, max=627921, avg=60465.99, stdev=155188.62 00:13:47.480 clat (msec): min=677, max=1704, avg=1451.32, stdev=254.53 00:13:47.480 lat (msec): min=1305, max=1705, avg=1511.78, stdev=165.85 00:13:47.480 clat percentiles (msec): 00:13:47.480 | 1.00th=[ 676], 5.00th=[ 676], 10.00th=[ 1301], 20.00th=[ 1334], 00:13:47.480 | 30.00th=[ 1351], 40.00th=[ 1385], 50.00th=[ 1401], 60.00th=[ 1620], 00:13:47.480 | 70.00th=[ 1636], 80.00th=[ 1687], 90.00th=[ 1703], 95.00th=[ 1703], 00:13:47.480 | 99.00th=[ 1703], 99.50th=[ 1703], 99.90th=[ 1703], 99.95th=[ 1703], 00:13:47.480 | 99.99th=[ 1703] 00:13:47.480 lat (msec) : 750=5.88%, 2000=94.12% 00:13:47.480 cpu : usr=0.06%, sys=0.88%, ctx=34, majf=0, minf=4353 00:13:47.480 IO depths : 1=5.9%, 2=11.8%, 4=23.5%, 8=47.1%, 16=11.8%, 32=0.0%, >=64=0.0% 00:13:47.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.480 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:13:47.480 issued rwts: total=17,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.480 latency : target=0, window=0, percentile=100.00%, depth=1024 00:13:47.480 job1: (groupid=0, jobs=1): err= 0: pid=87601: Tue Jul 23 22:15:19 2024 00:13:47.480 read: IOPS=16, BW=16.9MiB/s (17.7MB/s)(29.0MiB/1716msec) 00:13:47.480 slat (usec): min=798, max=627862, avg=35811.11, stdev=120610.72 00:13:47.480 clat (msec): min=676, max=1713, avg=1529.13, stdev=232.10 
00:13:47.480 lat (msec): min=1304, max=1715, avg=1564.94, stdev=166.83 00:13:47.480 clat percentiles (msec): 00:13:47.480 | 1.00th=[ 676], 5.00th=[ 1301], 10.00th=[ 1318], 20.00th=[ 1334], 00:13:47.480 | 30.00th=[ 1368], 40.00th=[ 1620], 50.00th=[ 1636], 60.00th=[ 1670], 00:13:47.480 | 70.00th=[ 1703], 80.00th=[ 1703], 90.00th=[ 1720], 95.00th=[ 1720], 00:13:47.480 | 99.00th=[ 1720], 99.50th=[ 1720], 99.90th=[ 1720], 99.95th=[ 1720], 00:13:47.480 | 99.99th=[ 1720] 00:13:47.480 lat (msec) : 750=3.45%, 2000=96.55% 00:13:47.480 cpu : usr=0.06%, sys=1.69%, ctx=56, majf=0, minf=7425 00:13:47.480 IO depths : 1=3.4%, 2=6.9%, 4=13.8%, 8=27.6%, 16=48.3%, 32=0.0%, >=64=0.0% 00:13:47.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.480 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:13:47.480 issued rwts: total=29,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.480 latency : target=0, window=0, percentile=100.00%, depth=1024 00:13:47.480 00:13:47.480 Run status group 0 (all jobs): 00:13:47.480 READ: bw=209MiB/s (219MB/s), 9.96MiB/s-37.1MiB/s (10.4MB/s-38.9MB/s), io=367MiB (385MB), run=1706-1759msec 00:13:47.480 00:13:47.480 Disk stats (read/write): 00:13:47.480 sda: ios=159/0, merge=0/0, ticks=24709/0, in_queue=24709, util=93.66% 00:13:47.480 sdb: ios=104/0, merge=0/0, ticks=25764/0, in_queue=25764, util=92.83% 00:13:47.739 22:15:19 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@104 -- # '[' 1 -eq 1 ']' 00:13:47.739 22:15:19 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 1 -t write -r 300 -v 00:13:47.739 [global] 00:13:47.739 thread=1 00:13:47.739 invalidate=1 00:13:47.739 rw=write 00:13:47.739 time_based=1 00:13:47.739 runtime=300 00:13:47.739 ioengine=libaio 00:13:47.739 direct=1 00:13:47.739 bs=4096 00:13:47.739 iodepth=1 00:13:47.739 norandommap=0 00:13:47.739 numjobs=1 00:13:47.739 00:13:47.739 verify_dump=1 00:13:47.739 verify_backlog=512 
00:13:47.739 verify_state_save=0 00:13:47.739 do_verify=1 00:13:47.739 verify=crc32c-intel 00:13:47.739 [job0] 00:13:47.739 filename=/dev/sda 00:13:47.739 [job1] 00:13:47.739 filename=/dev/sdb 00:13:47.739 queue_depth set to 113 (sda) 00:13:47.739 queue_depth set to 113 (sdb) 00:13:47.739 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:47.739 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:47.739 fio-3.35 00:13:47.739 Starting 2 threads 00:13:47.739 [2024-07-23 22:15:19.880507] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:47.739 [2024-07-23 22:15:19.884378] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:54.309 [2024-07-23 22:15:25.363745] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:59.585 [2024-07-23 22:15:30.863452] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:04.861 [2024-07-23 22:15:36.483152] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:10.159 [2024-07-23 22:15:42.070001] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:16.725 [2024-07-23 22:15:47.662459] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:21.997 [2024-07-23 22:15:53.290142] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:27.294 [2024-07-23 22:15:58.888392] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:32.568 [2024-07-23 22:16:04.505519] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:32.568 [2024-07-23 22:16:04.516897] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:39.132 [2024-07-23 22:16:10.109419] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: 
unsupported INQUIRY VPD page 0xb9 00:14:44.406 [2024-07-23 22:16:15.717567] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:49.681 [2024-07-23 22:16:21.325881] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:54.952 [2024-07-23 22:16:26.940474] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:01.521 [2024-07-23 22:16:32.521050] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:05.712 [2024-07-23 22:16:37.828066] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:10.989 [2024-07-23 22:16:43.144352] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:16.272 [2024-07-23 22:16:48.450525] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:16.571 [2024-07-23 22:16:48.493242] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:21.844 [2024-07-23 22:16:53.750600] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:27.118 [2024-07-23 22:16:58.944714] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:32.385 [2024-07-23 22:17:04.276080] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:37.692 [2024-07-23 22:17:09.593346] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:42.963 [2024-07-23 22:17:14.883992] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:48.237 [2024-07-23 22:17:20.060247] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:53.507 [2024-07-23 22:17:25.276066] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:58.798 [2024-07-23 22:17:30.604712] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:58.798 
[2024-07-23 22:17:30.622086] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:04.063 [2024-07-23 22:17:36.123068] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:10.634 [2024-07-23 22:17:41.781636] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:15.903 [2024-07-23 22:17:47.314356] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:21.171 [2024-07-23 22:17:52.861710] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:26.466 [2024-07-23 22:17:58.422335] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:33.032 [2024-07-23 22:18:04.155042] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:38.305 [2024-07-23 22:18:09.864046] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:43.585 [2024-07-23 22:18:15.611738] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:43.585 [2024-07-23 22:18:15.631413] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:50.154 [2024-07-23 22:18:21.364036] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.426 [2024-07-23 22:18:26.831085] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:00.703 [2024-07-23 22:18:32.568493] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:07.270 [2024-07-23 22:18:38.316746] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:12.609 [2024-07-23 22:18:44.055052] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:17.887 [2024-07-23 22:18:49.776966] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:24.452 [2024-07-23 22:18:55.495096] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:29.726 [2024-07-23 22:19:01.242644] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:29.726 [2024-07-23 22:19:01.255186] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:35.002 [2024-07-23 22:19:06.820031] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:40.341 [2024-07-23 22:19:12.391766] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:46.912 [2024-07-23 22:19:18.009048] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:52.191 [2024-07-23 22:19:23.616883] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:57.470 [2024-07-23 22:19:29.253650] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:02.792 [2024-07-23 22:19:34.757932] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:09.362 [2024-07-23 22:19:40.295098] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:14.636 [2024-07-23 22:19:45.766874] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:14.636 [2024-07-23 22:19:45.798354] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:19.910 [2024-07-23 22:19:51.265187] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:25.244 [2024-07-23 22:19:56.728477] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:30.514 [2024-07-23 22:20:02.197670] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:35.789 [2024-07-23 22:20:07.619727] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:41.064 [2024-07-23 22:20:13.139878] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:18:47.632 [2024-07-23 22:20:18.655682] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:47.890 [2024-07-23 22:20:19.999217] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:47.890 [2024-07-23 22:20:20.003023] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:47.890 00:18:47.890 job0: (groupid=0, jobs=1): err= 0: pid=87652: Tue Jul 23 22:20:20 2024 00:18:47.890 read: IOPS=5924, BW=23.1MiB/s (24.3MB/s)(6942MiB/299996msec) 00:18:47.890 slat (usec): min=2, max=678, avg= 5.39, stdev= 2.68 00:18:47.890 clat (nsec): min=1060, max=3588.6k, avg=77999.08, stdev=11873.34 00:18:47.890 lat (usec): min=51, max=3593, avg=83.39, stdev=11.77 00:18:47.890 clat percentiles (usec): 00:18:47.890 | 1.00th=[ 62], 5.00th=[ 70], 10.00th=[ 72], 20.00th=[ 74], 00:18:47.890 | 30.00th=[ 75], 40.00th=[ 76], 50.00th=[ 77], 60.00th=[ 78], 00:18:47.890 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 87], 95.00th=[ 91], 00:18:47.890 | 99.00th=[ 102], 99.50th=[ 109], 99.90th=[ 129], 99.95th=[ 161], 00:18:47.890 | 99.99th=[ 375] 00:18:47.890 write: IOPS=5925, BW=23.1MiB/s (24.3MB/s)(6944MiB/299996msec); 0 zone resets 00:18:47.890 slat (usec): min=3, max=391, avg= 6.39, stdev= 2.61 00:18:47.890 clat (nsec): min=1040, max=3798.8k, avg=77481.10, stdev=13973.71 00:18:47.890 lat (usec): min=51, max=3805, avg=83.87, stdev=13.97 00:18:47.890 clat percentiles (usec): 00:18:47.890 | 1.00th=[ 54], 5.00th=[ 69], 10.00th=[ 70], 20.00th=[ 72], 00:18:47.890 | 30.00th=[ 75], 40.00th=[ 77], 50.00th=[ 78], 60.00th=[ 79], 00:18:47.890 | 70.00th=[ 80], 80.00th=[ 82], 90.00th=[ 87], 95.00th=[ 91], 00:18:47.890 | 99.00th=[ 102], 99.50th=[ 109], 99.90th=[ 139], 99.95th=[ 217], 00:18:47.890 | 99.99th=[ 490] 00:18:47.890 bw ( KiB/s): min=20184, max=26768, per=50.08%, avg=23738.90, stdev=1133.21, samples=599 00:18:47.890 iops : min= 5046, max= 6692, avg=5934.68, stdev=283.29, samples=599 
00:18:47.890 lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.45% 00:18:47.890 lat (usec) : 100=98.23%, 250=1.28%, 500=0.02%, 750=0.01%, 1000=0.01% 00:18:47.890 lat (msec) : 2=0.01%, 4=0.01% 00:18:47.890 cpu : usr=3.37%, sys=7.71%, ctx=3583549, majf=0, minf=2 00:18:47.890 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:47.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.891 issued rwts: total=1777216,1777664,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.891 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:47.891 job1: (groupid=0, jobs=1): err= 0: pid=87655: Tue Jul 23 22:20:20 2024 00:18:47.891 read: IOPS=5924, BW=23.1MiB/s (24.3MB/s)(6943MiB/300000msec) 00:18:47.891 slat (usec): min=2, max=1200, avg= 3.48, stdev= 2.38 00:18:47.891 clat (nsec): min=1425, max=3533.4k, avg=77748.75, stdev=13260.09 00:18:47.891 lat (usec): min=41, max=3540, avg=81.23, stdev=13.32 00:18:47.891 clat percentiles (usec): 00:18:47.891 | 1.00th=[ 65], 5.00th=[ 70], 10.00th=[ 71], 20.00th=[ 74], 00:18:47.891 | 30.00th=[ 75], 40.00th=[ 76], 50.00th=[ 78], 60.00th=[ 79], 00:18:47.891 | 70.00th=[ 80], 80.00th=[ 82], 90.00th=[ 86], 95.00th=[ 90], 00:18:47.891 | 99.00th=[ 101], 99.50th=[ 108], 99.90th=[ 133], 99.95th=[ 182], 00:18:47.891 | 99.99th=[ 469] 00:18:47.891 write: IOPS=5925, BW=23.1MiB/s (24.3MB/s)(6944MiB/300000msec); 0 zone resets 00:18:47.891 slat (usec): min=3, max=548, avg= 4.89, stdev= 2.27 00:18:47.891 clat (nsec): min=1008, max=3710.5k, avg=81228.71, stdev=12544.80 00:18:47.891 lat (usec): min=47, max=3726, avg=86.12, stdev=12.51 00:18:47.891 clat percentiles (usec): 00:18:47.891 | 1.00th=[ 65], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 77], 00:18:47.891 | 30.00th=[ 78], 40.00th=[ 79], 50.00th=[ 81], 60.00th=[ 82], 00:18:47.891 | 70.00th=[ 83], 80.00th=[ 86], 90.00th=[ 90], 95.00th=[ 95], 00:18:47.891 | 
99.00th=[ 108], 99.50th=[ 113], 99.90th=[ 141], 99.95th=[ 202], 00:18:47.891 | 99.99th=[ 420] 00:18:47.891 bw ( KiB/s): min=20168, max=26768, per=50.08%, avg=23740.30, stdev=1121.74, samples=599 00:18:47.891 iops : min= 5042, max= 6692, avg=5935.02, stdev=280.41, samples=599 00:18:47.891 lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.11% 00:18:47.891 lat (usec) : 100=98.18%, 250=1.67%, 500=0.02%, 750=0.01%, 1000=0.01% 00:18:47.891 lat (msec) : 2=0.01%, 4=0.01% 00:18:47.891 cpu : usr=3.27%, sys=6.06%, ctx=3600906, majf=0, minf=1 00:18:47.891 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:47.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.891 issued rwts: total=1777361,1777664,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.891 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:47.891 00:18:47.891 Run status group 0 (all jobs): 00:18:47.891 READ: bw=46.3MiB/s (48.5MB/s), 23.1MiB/s-23.1MiB/s (24.3MB/s-24.3MB/s), io=13.6GiB (14.6GB), run=299996-300000msec 00:18:47.891 WRITE: bw=46.3MiB/s (48.5MB/s), 23.1MiB/s-23.1MiB/s (24.3MB/s-24.3MB/s), io=13.6GiB (14.6GB), run=299996-300000msec 00:18:47.891 00:18:47.891 Disk stats (read/write): 00:18:47.891 sda: ios=1779280/1777050, merge=0/0, ticks=134043/132973, in_queue=267016, util=100.00% 00:18:47.891 sdb: ios=1777021/1777152, merge=0/0, ticks=124236/131948, in_queue=256184, util=100.00% 00:18:47.891 22:20:20 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@116 -- # fio_pid=91183 00:18:47.891 22:20:20 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1048576 -d 128 -t rw -r 10 00:18:47.891 22:20:20 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@118 -- # sleep 3 00:18:47.891 [global] 00:18:47.891 thread=1 00:18:47.891 invalidate=1 00:18:47.891 rw=rw 00:18:47.891 time_based=1 00:18:47.891 runtime=10 00:18:47.891 
ioengine=libaio 00:18:47.891 direct=1 00:18:47.891 bs=1048576 00:18:47.891 iodepth=128 00:18:47.891 norandommap=1 00:18:47.891 numjobs=1 00:18:47.891 00:18:47.891 [job0] 00:18:47.891 filename=/dev/sda 00:18:47.891 [job1] 00:18:47.891 filename=/dev/sdb 00:18:48.148 queue_depth set to 113 (sda) 00:18:48.148 queue_depth set to 113 (sdb) 00:18:48.148 job0: (g=0): rw=rw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:18:48.148 job1: (g=0): rw=rw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:18:48.148 fio-3.35 00:18:48.148 Starting 2 threads 00:18:48.148 [2024-07-23 22:20:20.224995] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:48.148 [2024-07-23 22:20:20.228698] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:51.436 22:20:23 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:51.436 [2024-07-23 22:20:23.284602] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (raid0) received event(SPDK_BDEV_EVENT_REMOVE) 00:18:51.436 22:20:23 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@124 -- # for malloc_bdev in $malloc_bdevs 00:18:51.436 22:20:23 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: write offset=12582912, buflen=1048576 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: write offset=13631488, buflen=1048576 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: write offset=14680064, buflen=1048576 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: write offset=15728640, buflen=1048576 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: write offset=16777216, buflen=1048576 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: read 
offset=119537664, buflen=1048576 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: read offset=120586240, buflen=1048576 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: read offset=104857600, buflen=1048576 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: read offset=123731968, buflen=1048576 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: read offset=124780544, buflen=1048576 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: write offset=20971520, buflen=1048576 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: write offset=22020096, buflen=1048576 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: read offset=125829120, buflen=1048576 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: read offset=126877696, buflen=1048576 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: read offset=127926272, buflen=1048576 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: read offset=121634816, buflen=1048576 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: read offset=128974848, buflen=1048576 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: read offset=130023424, buflen=1048576 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: read offset=122683392, buflen=1048576 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: write offset=17825792, buflen=1048576 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: read offset=131072000, buflen=1048576 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: write offset=18874368, buflen=1048576 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: read offset=132120576, buflen=1048576 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: write offset=19922944, buflen=1048576 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: read 
offset=133169152, buflen=1048576 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: write offset=23068672, buflen=1048576 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: write offset=24117248, buflen=1048576 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: read offset=0, buflen=1048576 00:18:51.436 fio: io_u error on file /dev/sda: Input/output error: read offset=1048576, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=2097152, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=25165824, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=26214400, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=3145728, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=4194304, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=27262976, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=28311552, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=29360128, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=5242880, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=30408704, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=31457280, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=6291456, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=7340032, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=8388608, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=9437184, 
buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=10485760, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=32505856, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=33554432, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=34603008, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=11534336, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=12582912, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=13631488, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=35651584, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=14680064, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=15728640, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=16777216, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=36700160, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=17825792, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=22020096, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=18874368, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=23068672, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=37748736, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=40894464, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=19922944, buflen=1048576 
00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=41943040, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=38797312, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=39845888, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=20971520, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=24117248, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=25165824, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=42991616, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=44040192, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=45088768, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=46137344, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=26214400, buflen=1048576 00:18:51.437 fio: pid=91210, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=47185920, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=48234496, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=27262976, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=49283072, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=28311552, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=50331648, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=51380224, buflen=1048576 00:18:51.437 fio: 
io_u error on file /dev/sda: Input/output error: write offset=52428800, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=53477376, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=29360128, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=54525952, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=30408704, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=55574528, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=31457280, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=56623104, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=32505856, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=57671680, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=58720256, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=59768832, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=60817408, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=33554432, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=34603008, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=35651584, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=36700160, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=37748736, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=61865984, buflen=1048576 00:18:51.437 fio: io_u error on 
file /dev/sda: Input/output error: write offset=62914560, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=63963136, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=38797312, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=39845888, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=65011712, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=66060288, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=67108864, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=68157440, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=40894464, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=69206016, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=41943040, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=128974848, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=130023424, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=131072000, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=132120576, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=105906176, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=106954752, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=133169152, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=108003328, buflen=1048576 00:18:51.437 fio: io_u error on 
file /dev/sda: Input/output error: write offset=0, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=1048576, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=2097152, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=3145728, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=109051904, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=4194304, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=110100480, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: read offset=111149056, buflen=1048576 00:18:51.437 fio: io_u error on file /dev/sda: Input/output error: write offset=5242880, buflen=1048576 00:18:51.437 22:20:23 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@124 -- # for malloc_bdev in $malloc_bdevs 00:18:51.437 22:20:23 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:51.696 22:20:23 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:51.954 [2024-07-23 22:20:24.061968] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Malloc2) received event(SPDK_BDEV_EVENT_REMOVE) 00:18:51.954 [2024-07-23 22:20:24.062315] iscsi.c:4336:iscsi_pdu_payload_op_data: *ERROR*: Not found for transfer_tag=131a 00:18:51.954 [2024-07-23 22:20:24.062353] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131a 00:18:52.213 [2024-07-23 22:20:24.370389] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131a 00:18:52.213 [2024-07-23 22:20:24.371421] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131a 00:18:52.213 [2024-07-23 22:20:24.372986] iscsi.c:4221:iscsi_pdu_hdr_op_data: 
*ERROR*: Not found task for transfer_tag=131a 00:18:52.213 [2024-07-23 22:20:24.373071] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131a 00:18:52.213 [2024-07-23 22:20:24.373122] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131a 00:18:52.213 [2024-07-23 22:20:24.373192] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131a 00:18:52.213 [2024-07-23 22:20:24.373238] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131a 00:18:52.213 [2024-07-23 22:20:24.373303] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131a 00:18:52.214 [2024-07-23 22:20:24.373343] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131a 00:18:52.214 [2024-07-23 22:20:24.387139] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131a 00:18:52.214 [2024-07-23 22:20:24.387223] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131b 00:18:52.214 [2024-07-23 22:20:24.387269] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131b 00:18:52.214 [2024-07-23 22:20:24.387328] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131b 00:18:52.214 [2024-07-23 22:20:24.387376] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131b 00:18:52.214 22:20:24 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@131 -- # fio_status=0 00:18:52.214 22:20:24 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@132 -- # wait 91183 00:18:52.214 [2024-07-23 22:20:24.394421] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131b 00:18:52.214 [2024-07-23 22:20:24.395830] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131b 00:18:52.214 [2024-07-23 22:20:24.397490] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131b 00:18:52.214 [2024-07-23 22:20:24.398955] 
iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131b 00:18:52.214 [2024-07-23 22:20:24.400728] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131b 00:18:52.214 [2024-07-23 22:20:24.402176] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131b 00:18:52.214 [2024-07-23 22:20:24.403964] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131b 00:18:52.214 [2024-07-23 22:20:24.405398] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131b 00:18:52.214 [2024-07-23 22:20:24.406831] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131b 00:18:52.214 [2024-07-23 22:20:24.408381] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131b 00:18:52.474 [2024-07-23 22:20:24.410214] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131b 00:18:52.474 [2024-07-23 22:20:24.411293] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131b 00:18:52.474 [2024-07-23 22:20:24.412950] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131c 00:18:52.474 [2024-07-23 22:20:24.414393] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131c 00:18:52.474 [2024-07-23 22:20:24.415838] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131c 00:18:52.474 [2024-07-23 22:20:24.417806] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131c 00:18:52.474 [2024-07-23 22:20:24.419267] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131c 00:18:52.474 [2024-07-23 22:20:24.421009] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131c 00:18:52.474 [2024-07-23 22:20:24.422458] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131c 00:18:52.474 [2024-07-23 22:20:24.423913] 
iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131c 00:18:52.474 [2024-07-23 22:20:24.425730] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131c 00:18:52.474 [2024-07-23 22:20:24.427199] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131c 00:18:52.474 [2024-07-23 22:20:24.428980] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131c 00:18:52.474 [2024-07-23 22:20:24.430113] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131c 00:18:52.474 [2024-07-23 22:20:24.431820] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131c 00:18:52.474 [2024-07-23 22:20:24.433515] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131c 00:18:52.474 [2024-07-23 22:20:24.434979] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131c 00:18:52.474 [2024-07-23 22:20:24.436775] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131c 00:18:52.474 [2024-07-23 22:20:24.438119] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131d 00:18:52.474 [2024-07-23 22:20:24.439791] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131d 00:18:52.474 [2024-07-23 22:20:24.441292] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131d 00:18:52.474 [2024-07-23 22:20:24.442833] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131d 00:18:52.474 [2024-07-23 22:20:24.444656] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131d 00:18:52.474 [2024-07-23 22:20:24.446129] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131d 00:18:52.474 [2024-07-23 22:20:24.447774] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131d 00:18:52.474 [2024-07-23 22:20:24.449301] 
iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131d 00:18:52.474 [2024-07-23 22:20:24.451004] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131d 00:18:52.474 [2024-07-23 22:20:24.452516] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131d 00:18:52.474 [2024-07-23 22:20:24.454054] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131d 00:18:52.474 [2024-07-23 22:20:24.455764] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131d 00:18:52.474 [2024-07-23 22:20:24.457391] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131d 00:18:52.474 [2024-07-23 22:20:24.459250] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131d 00:18:52.474 [2024-07-23 22:20:24.460815] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131d 00:18:52.474 [2024-07-23 22:20:24.462510] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=131d 00:18:52.474 fio: io_u error on file /dev/sdb: Input/output error: write offset=903872512, buflen=1048576 00:18:52.474 fio: io_u error on file /dev/sdb: Input/output error: write offset=904921088, buflen=1048576 00:18:52.474 fio: io_u error on file /dev/sdb: Input/output error: write offset=882900992, buflen=1048576 00:18:52.474 fio: io_u error on file /dev/sdb: Input/output error: write offset=883949568, buflen=1048576 00:18:52.474 fio: io_u error on file /dev/sdb: Input/output error: write offset=884998144, buflen=1048576 00:18:52.474 fio: io_u error on file /dev/sdb: Input/output error: write offset=886046720, buflen=1048576 00:18:52.474 fio: io_u error on file /dev/sdb: Input/output error: write offset=887095296, buflen=1048576 00:18:52.474 fio: io_u error on file /dev/sdb: Input/output error: write offset=888143872, buflen=1048576 00:18:52.474 fio: io_u error on file /dev/sdb: Input/output error: write 
offset=889192448, buflen=1048576 00:18:52.474 fio: io_u error on file /dev/sdb: Input/output error: write offset=890241024, buflen=1048576 00:18:52.474 fio: io_u error on file /dev/sdb: Input/output error: write offset=891289600, buflen=1048576 00:18:52.474 fio: io_u error on file /dev/sdb: Input/output error: write offset=892338176, buflen=1048576 00:18:52.474 fio: io_u error on file /dev/sdb: Input/output error: write offset=893386752, buflen=1048576 00:18:52.474 fio: io_u error on file /dev/sdb: Input/output error: write offset=894435328, buflen=1048576 00:18:52.474 fio: io_u error on file /dev/sdb: Input/output error: write offset=895483904, buflen=1048576 00:18:52.474 fio: io_u error on file /dev/sdb: Input/output error: write offset=896532480, buflen=1048576 00:18:52.474 fio: io_u error on file /dev/sdb: Input/output error: write offset=897581056, buflen=1048576 00:18:52.474 fio: io_u error on file /dev/sdb: Input/output error: write offset=898629632, buflen=1048576 00:18:52.474 fio: io_u error on file /dev/sdb: Input/output error: write offset=899678208, buflen=1048576 00:18:52.474 fio: io_u error on file /dev/sdb: Input/output error: write offset=900726784, buflen=1048576 00:18:52.474 fio: io_u error on file /dev/sdb: Input/output error: write offset=901775360, buflen=1048576 00:18:52.474 fio: io_u error on file /dev/sdb: Input/output error: write offset=902823936, buflen=1048576 00:18:52.474 fio: io_u error on file /dev/sdb: Input/output error: write offset=878706688, buflen=1048576 00:18:52.474 fio: io_u error on file /dev/sdb: Input/output error: write offset=879755264, buflen=1048576 00:18:52.474 fio: io_u error on file /dev/sdb: Input/output error: write offset=880803840, buflen=1048576 00:18:52.474 fio: io_u error on file /dev/sdb: Input/output error: write offset=905969664, buflen=1048576 00:18:52.474 fio: io_u error on file /dev/sdb: Input/output error: write offset=881852416, buflen=1048576 00:18:52.474 fio: io_u error on file /dev/sdb: 
Input/output error: write offset=907018240, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=908066816, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=837812224, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=909115392, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=910163968, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=838860800, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=911212544, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=912261120, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=913309696, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=914358272, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=839909376, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=840957952, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=915406848, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=916455424, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=917504000, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=918552576, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=842006528, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=919601152, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=843055104, buflen=1048576 00:18:52.475 fio: io_u error on file 
/dev/sdb: Input/output error: read offset=844103680, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=920649728, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=845152256, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=921698304, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=922746880, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=923795456, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=924844032, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=846200832, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=925892608, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=926941184, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=927989760, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=847249408, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=929038336, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=848297984, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=849346560, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=930086912, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=931135488, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=932184064, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=933232640, buflen=1048576 00:18:52.475 fio: io_u error 
on file /dev/sdb: Input/output error: read offset=850395136, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=934281216, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=851443712, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=852492288, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=935329792, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=853540864, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=854589440, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=936378368, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=855638016, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=856686592, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=857735168, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=858783744, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=937426944, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=859832320, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=938475520, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=860880896, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=939524096, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=861929472, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=940572672, buflen=1048576 00:18:52.475 fio: io_u 
error on file /dev/sdb: Input/output error: read offset=862978048, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=941621248, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=942669824, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=864026624, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=865075200, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=943718400, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=944766976, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=945815552, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=946864128, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=947912704, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=866123776, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=867172352, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=868220928, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=948961280, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=869269504, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=870318080, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=871366656, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=950009856, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=951058432, buflen=1048576 00:18:52.475 fio: 
io_u error on file /dev/sdb: Input/output error: write offset=952107008, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=872415232, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=953155584, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=873463808, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=954204160, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=955252736, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=956301312, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=957349888, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=874512384, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=875560960, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=958398464, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=959447040, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=876609536, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=960495616, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=961544192, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=877658112, buflen=1048576 00:18:52.475 fio: pid=91213, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=962592768, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=878706688, buflen=1048576 00:18:52.475 fio: 
io_u error on file /dev/sdb: Input/output error: write offset=963641344, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=964689920, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=879755264, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=965738496, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: write offset=966787072, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=880803840, buflen=1048576 00:18:52.475 fio: io_u error on file /dev/sdb: Input/output error: read offset=881852416, buflen=1048576 00:18:52.475 00:18:52.475 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=91210: Tue Jul 23 22:20:24 2024 00:18:52.475 read: IOPS=103, BW=81.6MiB/s (85.6MB/s)(235MiB/2879msec) 00:18:52.475 slat (usec): min=36, max=35971, avg=4258.60, stdev=6982.75 00:18:52.475 clat (msec): min=187, max=1186, avg=652.61, stdev=197.51 00:18:52.475 lat (msec): min=187, max=1186, avg=657.36, stdev=197.77 00:18:52.475 clat percentiles (msec): 00:18:52.475 | 1.00th=[ 188], 5.00th=[ 388], 10.00th=[ 477], 20.00th=[ 510], 00:18:52.475 | 30.00th=[ 535], 40.00th=[ 558], 50.00th=[ 609], 60.00th=[ 667], 00:18:52.475 | 70.00th=[ 693], 80.00th=[ 835], 90.00th=[ 911], 95.00th=[ 1083], 00:18:52.475 | 99.00th=[ 1150], 99.50th=[ 1183], 99.90th=[ 1183], 99.95th=[ 1183], 00:18:52.475 | 99.99th=[ 1183] 00:18:52.475 bw ( KiB/s): min=51097, max=139264, per=33.26%, avg=87991.00, stdev=38483.20, samples=4 00:18:52.475 iops : min= 49, max= 136, avg=85.50, stdev=37.83, samples=4 00:18:52.475 write: IOPS=112, BW=89.3MiB/s (93.6MB/s)(257MiB/2879msec); 0 zone resets 00:18:52.475 slat (usec): min=80, max=49867, avg=4919.38, stdev=8600.24 00:18:52.475 clat (msec): min=208, max=1213, avg=696.67, stdev=193.74 00:18:52.475 lat (msec): min=229, max=1222, 
avg=702.29, stdev=194.14 00:18:52.475 clat percentiles (msec): 00:18:52.475 | 1.00th=[ 257], 5.00th=[ 439], 10.00th=[ 523], 20.00th=[ 558], 00:18:52.475 | 30.00th=[ 592], 40.00th=[ 625], 50.00th=[ 667], 60.00th=[ 701], 00:18:52.476 | 70.00th=[ 726], 80.00th=[ 827], 90.00th=[ 953], 95.00th=[ 1150], 00:18:52.476 | 99.00th=[ 1183], 99.50th=[ 1200], 99.90th=[ 1217], 99.95th=[ 1217], 00:18:52.476 | 99.99th=[ 1217] 00:18:52.476 bw ( KiB/s): min=57229, max=114688, per=32.87%, avg=92083.00, stdev=24562.87, samples=4 00:18:52.476 iops : min= 55, max= 112, avg=89.50, stdev=24.37, samples=4 00:18:52.476 lat (msec) : 250=1.29%, 500=8.87%, 750=49.68%, 1000=13.55%, 2000=5.97% 00:18:52.476 cpu : usr=0.87%, sys=1.56%, ctx=469, majf=0, minf=2 00:18:52.476 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.8% 00:18:52.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.476 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:18:52.476 issued rwts: total=297,323,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.476 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:52.476 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=91213: Tue Jul 23 22:20:24 2024 00:18:52.476 read: IOPS=210, BW=200MiB/s (209MB/s)(799MiB/4002msec) 00:18:52.476 slat (usec): min=38, max=345561, avg=2434.61, stdev=12885.70 00:18:52.476 clat (msec): min=138, max=535, avg=268.54, stdev=76.07 00:18:52.476 lat (msec): min=138, max=535, avg=270.67, stdev=76.26 00:18:52.476 clat percentiles (msec): 00:18:52.476 | 1.00th=[ 148], 5.00th=[ 157], 10.00th=[ 171], 20.00th=[ 199], 00:18:52.476 | 30.00th=[ 239], 40.00th=[ 251], 50.00th=[ 266], 60.00th=[ 279], 00:18:52.476 | 70.00th=[ 292], 80.00th=[ 326], 90.00th=[ 355], 95.00th=[ 380], 00:18:52.476 | 99.00th=[ 514], 99.50th=[ 518], 99.90th=[ 535], 99.95th=[ 535], 00:18:52.476 | 99.99th=[ 535] 00:18:52.476 bw ( KiB/s): min=119022, max=352961, 
per=76.55%, avg=202537.62, stdev=69420.25, samples=8 00:18:52.476 iops : min= 116, max= 344, avg=197.50, stdev=67.68, samples=8 00:18:52.476 write: IOPS=230, BW=209MiB/s (220MB/s)(838MiB/4002msec); 0 zone resets 00:18:52.476 slat (usec): min=77, max=57908, avg=2102.16, stdev=5519.09 00:18:52.476 clat (msec): min=153, max=536, avg=301.94, stdev=69.37 00:18:52.476 lat (msec): min=153, max=555, avg=304.09, stdev=69.79 00:18:52.476 clat percentiles (msec): 00:18:52.476 | 1.00th=[ 157], 5.00th=[ 197], 10.00th=[ 205], 20.00th=[ 232], 00:18:52.476 | 30.00th=[ 271], 40.00th=[ 292], 50.00th=[ 305], 60.00th=[ 321], 00:18:52.476 | 70.00th=[ 342], 80.00th=[ 363], 90.00th=[ 384], 95.00th=[ 401], 00:18:52.476 | 99.00th=[ 518], 99.50th=[ 518], 99.90th=[ 535], 99.95th=[ 535], 00:18:52.476 | 99.99th=[ 535] 00:18:52.476 bw ( KiB/s): min=137490, max=330388, per=76.59%, avg=214588.62, stdev=58092.11, samples=8 00:18:52.476 iops : min= 134, max= 322, avg=209.25, stdev=56.64, samples=8 00:18:52.476 lat (msec) : 250=28.33%, 500=62.66%, 750=1.76% 00:18:52.476 cpu : usr=1.87%, sys=3.22%, ctx=801, majf=0, minf=1 00:18:52.476 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:18:52.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.476 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:52.476 issued rwts: total=842,923,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.476 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:52.476 00:18:52.476 Run status group 0 (all jobs): 00:18:52.476 READ: bw=258MiB/s (271MB/s), 81.6MiB/s-200MiB/s (85.6MB/s-209MB/s), io=1034MiB (1084MB), run=2879-4002msec 00:18:52.476 WRITE: bw=274MiB/s (287MB/s), 89.3MiB/s-209MiB/s (93.6MB/s-220MB/s), io=1095MiB (1148MB), run=2879-4002msec 00:18:52.476 00:18:52.476 Disk stats (read/write): 00:18:52.476 sda: ios=275/246, merge=0/0, ticks=64300/80557, in_queue=144856, util=75.63% 00:18:52.476 sdb: ios=885/916, merge=0/0, 
ticks=85661/131568, in_queue=217228, util=91.89% 00:18:52.476 iscsi hotplug test: fio failed as expected 00:18:52.476 Cleaning up iSCSI connection 00:18:52.476 22:20:24 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@132 -- # fio_status=2 00:18:52.476 22:20:24 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@134 -- # '[' 2 -eq 0 ']' 00:18:52.476 22:20:24 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@138 -- # echo 'iscsi hotplug test: fio failed as expected' 00:18:52.476 22:20:24 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@141 -- # iscsicleanup 00:18:52.476 22:20:24 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:18:52.476 22:20:24 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:18:52.476 Logging out of session [sid: 19, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:18:52.476 Logout of [sid: 19, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:18:52.476 22:20:24 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:18:52.476 22:20:24 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@983 -- # rm -rf 00:18:52.476 22:20:24 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_delete_target_node iqn.2016-06.io.spdk:Target3 00:18:52.735 22:20:24 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@144 -- # delete_tmp_files 00:18:53.011 22:20:24 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@14 -- # rm -f /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/iscsi2.json 00:18:53.011 22:20:24 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@15 -- # rm -f ./local-job0-0-verify.state 00:18:53.011 22:20:24 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@16 -- # rm -f ./local-job1-1-verify.state 00:18:53.011 22:20:24 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:18:53.011 22:20:24 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@148 -- # killprocess 87245 00:18:53.011 22:20:24 iscsi_tgt.iscsi_tgt_fio -- 
common/autotest_common.sh@948 -- # '[' -z 87245 ']' 00:18:53.011 22:20:24 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@952 -- # kill -0 87245 00:18:53.011 22:20:24 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@953 -- # uname 00:18:53.011 22:20:24 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:53.011 22:20:24 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87245 00:18:53.011 killing process with pid 87245 00:18:53.011 22:20:24 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:53.011 22:20:24 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:53.011 22:20:24 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87245' 00:18:53.011 22:20:24 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@967 -- # kill 87245 00:18:53.011 22:20:24 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@972 -- # wait 87245 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@150 -- # iscsitestfini 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:18:53.284 00:18:53.284 real 5m16.545s 00:18:53.284 user 3m36.176s 00:18:53.284 sys 2m3.522s 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:53.284 ************************************ 00:18:53.284 END TEST iscsi_tgt_fio 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:18:53.284 ************************************ 00:18:53.284 22:20:25 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@38 -- # run_test iscsi_tgt_qos /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos/qos.sh 00:18:53.284 22:20:25 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:53.284 22:20:25 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:53.284 22:20:25 iscsi_tgt -- 
common/autotest_common.sh@10 -- # set +x 00:18:53.284 ************************************ 00:18:53.284 START TEST iscsi_tgt_qos 00:18:53.284 ************************************ 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos/qos.sh 00:18:53.284 * Looking for test storage... 00:18:53.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@24 -- # 
NETMASK=10.0.0.2/32 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@11 -- # iscsitestinit 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@44 -- # '[' -z 10.0.0.1 ']' 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@49 -- # '[' -z 10.0.0.2 ']' 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@54 -- # MALLOC_BDEV_SIZE=64 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@55 -- # MALLOC_BLOCK_SIZE=512 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@56 -- # IOPS_RESULT= 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@57 -- # BANDWIDTH_RESULT= 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@58 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@60 -- # timing_enter start_iscsi_tgt 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:53.284 Process pid: 91368 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@63 -- # pid=91368 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@62 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@64 -- # echo 'Process pid: 91368' 
00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@65 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@66 -- # waitforlisten 91368 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@829 -- # '[' -z 91368 ']' 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:53.284 22:20:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:53.544 [2024-07-23 22:20:25.532360] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:18:53.544 [2024-07-23 22:20:25.532493] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91368 ] 00:18:53.544 [2024-07-23 22:20:25.664728] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:18:53.544 [2024-07-23 22:20:25.680349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.544 [2024-07-23 22:20:25.728438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.803 [2024-07-23 22:20:25.769318] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:53.803 iscsi_tgt is listening. Running tests... 00:18:53.803 22:20:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:53.803 22:20:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@862 -- # return 0 00:18:53.803 22:20:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@67 -- # echo 'iscsi_tgt is listening. Running tests...' 00:18:53.803 22:20:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@69 -- # timing_exit start_iscsi_tgt 00:18:53.803 22:20:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:53.803 22:20:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:53.803 22:20:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@71 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:18:53.803 22:20:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.803 22:20:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:53.803 22:20:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.803 22:20:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@72 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:18:53.803 22:20:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.803 22:20:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:53.803 22:20:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.803 22:20:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@73 -- # rpc_cmd bdev_malloc_create 64 512 00:18:53.803 22:20:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 
00:18:53.803 22:20:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:54.063 Malloc0 00:18:54.063 22:20:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.063 22:20:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@78 -- # rpc_cmd iscsi_create_target_node Target1 Target1_alias Malloc0:0 1:2 64 -d 00:18:54.063 22:20:26 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.063 22:20:26 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:54.063 22:20:26 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.063 22:20:26 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@79 -- # sleep 1 00:18:55.000 22:20:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@81 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:18:55.000 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:18:55.000 22:20:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@82 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:18:55.000 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:18:55.000 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:18:55.000 22:20:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@84 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:18:55.000 22:20:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@87 -- # run_fio Malloc0 00:18:55.000 22:20:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:18:55.000 22:20:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:18:55.000 22:20:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:18:55.000 22:20:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:18:55.000 22:20:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:18:55.000 22:20:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:18:55.000 22:20:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:18:55.000 22:20:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:18:55.000 22:20:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.000 22:20:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:55.000 [2024-07-23 22:20:27.076628] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:55.000 22:20:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.000 22:20:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:18:55.000 "tick_rate": 2100000000, 00:18:55.000 "ticks": 2167949304572, 00:18:55.000 "bdevs": [ 00:18:55.000 { 00:18:55.000 "name": "Malloc0", 00:18:55.000 "bytes_read": 41472, 00:18:55.000 "num_read_ops": 4, 00:18:55.000 "bytes_written": 0, 00:18:55.000 "num_write_ops": 0, 00:18:55.000 "bytes_unmapped": 0, 00:18:55.000 "num_unmap_ops": 0, 00:18:55.000 "bytes_copied": 0, 00:18:55.000 "num_copy_ops": 0, 00:18:55.000 "read_latency_ticks": 810492, 00:18:55.000 "max_read_latency_ticks": 298728, 00:18:55.000 "min_read_latency_ticks": 26548, 00:18:55.000 
"write_latency_ticks": 0, 00:18:55.000 "max_write_latency_ticks": 0, 00:18:55.000 "min_write_latency_ticks": 0, 00:18:55.000 "unmap_latency_ticks": 0, 00:18:55.000 "max_unmap_latency_ticks": 0, 00:18:55.000 "min_unmap_latency_ticks": 0, 00:18:55.000 "copy_latency_ticks": 0, 00:18:55.000 "max_copy_latency_ticks": 0, 00:18:55.000 "min_copy_latency_ticks": 0, 00:18:55.000 "io_error": {} 00:18:55.000 } 00:18:55.000 ] 00:18:55.000 }' 00:18:55.000 22:20:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:18:55.000 22:20:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=4 00:18:55.000 22:20:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:18:55.000 22:20:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=41472 00:18:55.000 22:20:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:18:55.259 [global] 00:18:55.259 thread=1 00:18:55.259 invalidate=1 00:18:55.259 rw=randread 00:18:55.259 time_based=1 00:18:55.259 runtime=5 00:18:55.259 ioengine=libaio 00:18:55.259 direct=1 00:18:55.259 bs=1024 00:18:55.259 iodepth=128 00:18:55.259 norandommap=1 00:18:55.259 numjobs=1 00:18:55.259 00:18:55.259 [job0] 00:18:55.259 filename=/dev/sda 00:18:55.259 queue_depth set to 113 (sda) 00:18:55.259 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:18:55.259 fio-3.35 00:18:55.259 Starting 1 thread 00:19:00.540 00:19:00.540 job0: (groupid=0, jobs=1): err= 0: pid=91443: Tue Jul 23 22:20:32 2024 00:19:00.540 read: IOPS=56.2k, BW=54.9MiB/s (57.6MB/s)(275MiB/5002msec) 00:19:00.540 slat (nsec): min=1937, max=809787, avg=16490.91, stdev=45472.98 00:19:00.540 clat (usec): min=1476, max=4017, avg=2258.56, stdev=73.63 00:19:00.540 lat (usec): min=1660, max=4022, avg=2275.05, stdev=58.55 00:19:00.540 clat percentiles (usec): 00:19:00.540 | 1.00th=[ 2040], 5.00th=[ 
2089], 10.00th=[ 2212], 20.00th=[ 2245], 00:19:00.540 | 30.00th=[ 2245], 40.00th=[ 2245], 50.00th=[ 2245], 60.00th=[ 2278], 00:19:00.540 | 70.00th=[ 2278], 80.00th=[ 2278], 90.00th=[ 2311], 95.00th=[ 2343], 00:19:00.541 | 99.00th=[ 2474], 99.50th=[ 2540], 99.90th=[ 2671], 99.95th=[ 3163], 00:19:00.541 | 99.99th=[ 3687] 00:19:00.541 bw ( KiB/s): min=55962, max=56576, per=100.00%, avg=56330.89, stdev=224.98, samples=9 00:19:00.541 iops : min=55962, max=56576, avg=56330.89, stdev=224.98, samples=9 00:19:00.541 lat (msec) : 2=0.09%, 4=99.91%, 10=0.01% 00:19:00.541 cpu : usr=6.76%, sys=19.22%, ctx=241241, majf=0, minf=32 00:19:00.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:00.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:00.541 issued rwts: total=281329,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:00.541 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:00.541 00:19:00.541 Run status group 0 (all jobs): 00:19:00.541 READ: bw=54.9MiB/s (57.6MB/s), 54.9MiB/s-54.9MiB/s (57.6MB/s-57.6MB/s), io=275MiB (288MB), run=5002-5002msec 00:19:00.541 00:19:00.541 Disk stats (read/write): 00:19:00.541 sda: ios=274879/0, merge=0/0, ticks=532962/0, in_queue=532962, util=98.15% 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:19:00.541 "tick_rate": 2100000000, 00:19:00.541 "ticks": 2179429606536, 00:19:00.541 "bdevs": [ 00:19:00.541 { 00:19:00.541 "name": "Malloc0", 00:19:00.541 "bytes_read": 
289191424, 00:19:00.541 "num_read_ops": 281386, 00:19:00.541 "bytes_written": 0, 00:19:00.541 "num_write_ops": 0, 00:19:00.541 "bytes_unmapped": 0, 00:19:00.541 "num_unmap_ops": 0, 00:19:00.541 "bytes_copied": 0, 00:19:00.541 "num_copy_ops": 0, 00:19:00.541 "read_latency_ticks": 54906597460, 00:19:00.541 "max_read_latency_ticks": 549770, 00:19:00.541 "min_read_latency_ticks": 10920, 00:19:00.541 "write_latency_ticks": 0, 00:19:00.541 "max_write_latency_ticks": 0, 00:19:00.541 "min_write_latency_ticks": 0, 00:19:00.541 "unmap_latency_ticks": 0, 00:19:00.541 "max_unmap_latency_ticks": 0, 00:19:00.541 "min_unmap_latency_ticks": 0, 00:19:00.541 "copy_latency_ticks": 0, 00:19:00.541 "max_copy_latency_ticks": 0, 00:19:00.541 "min_copy_latency_ticks": 0, 00:19:00.541 "io_error": {} 00:19:00.541 } 00:19:00.541 ] 00:19:00.541 }' 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=281386 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=289191424 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=56276 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=57829990 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@90 -- # IOPS_LIMIT=28138 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@91 -- # BANDWIDTH_LIMIT=28914995 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@94 -- # READ_BANDWIDTH_LIMIT=14457497 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@98 -- # IOPS_LIMIT=28000 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@99 -- # BANDWIDTH_LIMIT_MB=27 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@100 -- # BANDWIDTH_LIMIT=28311552 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@101 
-- # READ_BANDWIDTH_LIMIT_MB=13 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@102 -- # READ_BANDWIDTH_LIMIT=13631488 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@105 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 28000 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@106 -- # run_fio Malloc0 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:19:00.541 "tick_rate": 2100000000, 00:19:00.541 "ticks": 2179698561464, 00:19:00.541 "bdevs": [ 00:19:00.541 { 00:19:00.541 "name": "Malloc0", 00:19:00.541 "bytes_read": 289191424, 00:19:00.541 "num_read_ops": 281386, 00:19:00.541 "bytes_written": 0, 00:19:00.541 "num_write_ops": 0, 
00:19:00.541 "bytes_unmapped": 0, 00:19:00.541 "num_unmap_ops": 0, 00:19:00.541 "bytes_copied": 0, 00:19:00.541 "num_copy_ops": 0, 00:19:00.541 "read_latency_ticks": 54906597460, 00:19:00.541 "max_read_latency_ticks": 549770, 00:19:00.541 "min_read_latency_ticks": 10920, 00:19:00.541 "write_latency_ticks": 0, 00:19:00.541 "max_write_latency_ticks": 0, 00:19:00.541 "min_write_latency_ticks": 0, 00:19:00.541 "unmap_latency_ticks": 0, 00:19:00.541 "max_unmap_latency_ticks": 0, 00:19:00.541 "min_unmap_latency_ticks": 0, 00:19:00.541 "copy_latency_ticks": 0, 00:19:00.541 "max_copy_latency_ticks": 0, 00:19:00.541 "min_copy_latency_ticks": 0, 00:19:00.541 "io_error": {} 00:19:00.541 } 00:19:00.541 ] 00:19:00.541 }' 00:19:00.541 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:19:00.801 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=281386 00:19:00.801 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:19:00.801 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=289191424 00:19:00.801 22:20:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:19:00.801 [global] 00:19:00.801 thread=1 00:19:00.801 invalidate=1 00:19:00.801 rw=randread 00:19:00.801 time_based=1 00:19:00.801 runtime=5 00:19:00.801 ioengine=libaio 00:19:00.801 direct=1 00:19:00.801 bs=1024 00:19:00.801 iodepth=128 00:19:00.801 norandommap=1 00:19:00.801 numjobs=1 00:19:00.801 00:19:00.801 [job0] 00:19:00.801 filename=/dev/sda 00:19:00.801 queue_depth set to 113 (sda) 00:19:00.801 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:19:00.801 fio-3.35 00:19:00.801 Starting 1 thread 00:19:06.076 00:19:06.076 job0: (groupid=0, jobs=1): err= 0: pid=91531: Tue Jul 23 22:20:38 2024 00:19:06.076 read: IOPS=28.0k, BW=27.3MiB/s (28.7MB/s)(137MiB/5004msec) 
00:19:06.076 slat (usec): min=2, max=1443, avg=33.77, stdev=135.66 00:19:06.076 clat (usec): min=1394, max=8472, avg=4536.63, stdev=413.36 00:19:06.076 lat (usec): min=1403, max=8480, avg=4570.40, stdev=411.30 00:19:06.076 clat percentiles (usec): 00:19:06.076 | 1.00th=[ 3687], 5.00th=[ 4047], 10.00th=[ 4080], 20.00th=[ 4146], 00:19:06.076 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4686], 60.00th=[ 4817], 00:19:06.076 | 70.00th=[ 4883], 80.00th=[ 4883], 90.00th=[ 5014], 95.00th=[ 5014], 00:19:06.076 | 99.00th=[ 5211], 99.50th=[ 5211], 99.90th=[ 5407], 99.95th=[ 5866], 00:19:06.076 | 99.99th=[ 7635] 00:19:06.076 bw ( KiB/s): min=27972, max=28056, per=100.00%, avg=28040.44, stdev=31.22, samples=9 00:19:06.076 iops : min=27972, max=28056, avg=28040.44, stdev=31.22, samples=9 00:19:06.076 lat (msec) : 2=0.05%, 4=3.59%, 10=96.36% 00:19:06.076 cpu : usr=5.22%, sys=13.35%, ctx=102230, majf=0, minf=32 00:19:06.076 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:06.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.076 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:06.076 issued rwts: total=140098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.076 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.076 00:19:06.076 Run status group 0 (all jobs): 00:19:06.076 READ: bw=27.3MiB/s (28.7MB/s), 27.3MiB/s-27.3MiB/s (28.7MB/s-28.7MB/s), io=137MiB (143MB), run=5004-5004msec 00:19:06.076 00:19:06.076 Disk stats (read/write): 00:19:06.076 sda: ios=136864/0, merge=0/0, ticks=512073/0, in_queue=512073, util=98.15% 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:19:06.076 "tick_rate": 2100000000, 00:19:06.076 "ticks": 2191132923346, 00:19:06.076 "bdevs": [ 00:19:06.076 { 00:19:06.076 "name": "Malloc0", 00:19:06.076 "bytes_read": 432651776, 00:19:06.076 "num_read_ops": 421484, 00:19:06.076 "bytes_written": 0, 00:19:06.076 "num_write_ops": 0, 00:19:06.076 "bytes_unmapped": 0, 00:19:06.076 "num_unmap_ops": 0, 00:19:06.076 "bytes_copied": 0, 00:19:06.076 "num_copy_ops": 0, 00:19:06.076 "read_latency_ticks": 591416363194, 00:19:06.076 "max_read_latency_ticks": 5213364, 00:19:06.076 "min_read_latency_ticks": 10920, 00:19:06.076 "write_latency_ticks": 0, 00:19:06.076 "max_write_latency_ticks": 0, 00:19:06.076 "min_write_latency_ticks": 0, 00:19:06.076 "unmap_latency_ticks": 0, 00:19:06.076 "max_unmap_latency_ticks": 0, 00:19:06.076 "min_unmap_latency_ticks": 0, 00:19:06.076 "copy_latency_ticks": 0, 00:19:06.076 "max_copy_latency_ticks": 0, 00:19:06.076 "min_copy_latency_ticks": 0, 00:19:06.076 "io_error": {} 00:19:06.076 } 00:19:06.076 ] 00:19:06.076 }' 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=421484 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=432651776 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=28019 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=28692070 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@107 -- # verify_qos_limits 28019 28000 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=28019 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=28000 00:19:06.076 22:20:38 
iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@110 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 0 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@111 -- # run_fio Malloc0 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.076 22:20:38 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:06.336 22:20:38 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.336 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:19:06.336 "tick_rate": 2100000000, 00:19:06.336 "ticks": 2191435535818, 00:19:06.336 "bdevs": [ 00:19:06.336 { 00:19:06.336 "name": 
"Malloc0", 00:19:06.336 "bytes_read": 432651776, 00:19:06.336 "num_read_ops": 421484, 00:19:06.336 "bytes_written": 0, 00:19:06.336 "num_write_ops": 0, 00:19:06.336 "bytes_unmapped": 0, 00:19:06.336 "num_unmap_ops": 0, 00:19:06.336 "bytes_copied": 0, 00:19:06.336 "num_copy_ops": 0, 00:19:06.336 "read_latency_ticks": 591416363194, 00:19:06.336 "max_read_latency_ticks": 5213364, 00:19:06.336 "min_read_latency_ticks": 10920, 00:19:06.336 "write_latency_ticks": 0, 00:19:06.336 "max_write_latency_ticks": 0, 00:19:06.336 "min_write_latency_ticks": 0, 00:19:06.336 "unmap_latency_ticks": 0, 00:19:06.336 "max_unmap_latency_ticks": 0, 00:19:06.336 "min_unmap_latency_ticks": 0, 00:19:06.336 "copy_latency_ticks": 0, 00:19:06.336 "max_copy_latency_ticks": 0, 00:19:06.336 "min_copy_latency_ticks": 0, 00:19:06.336 "io_error": {} 00:19:06.336 } 00:19:06.336 ] 00:19:06.336 }' 00:19:06.336 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:19:06.336 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=421484 00:19:06.336 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:19:06.336 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=432651776 00:19:06.336 22:20:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:19:06.336 [global] 00:19:06.336 thread=1 00:19:06.336 invalidate=1 00:19:06.336 rw=randread 00:19:06.336 time_based=1 00:19:06.336 runtime=5 00:19:06.336 ioengine=libaio 00:19:06.336 direct=1 00:19:06.336 bs=1024 00:19:06.336 iodepth=128 00:19:06.336 norandommap=1 00:19:06.336 numjobs=1 00:19:06.336 00:19:06.336 [job0] 00:19:06.336 filename=/dev/sda 00:19:06.336 queue_depth set to 113 (sda) 00:19:06.595 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:19:06.595 fio-3.35 00:19:06.595 Starting 1 thread 00:19:11.868 
00:19:11.868 job0: (groupid=0, jobs=1): err= 0: pid=91626: Tue Jul 23 22:20:43 2024 00:19:11.868 read: IOPS=56.0k, BW=54.7MiB/s (57.3MB/s)(273MiB/5002msec) 00:19:11.868 slat (nsec): min=1902, max=2889.9k, avg=16541.90, stdev=45994.57 00:19:11.868 clat (usec): min=1058, max=5282, avg=2269.13, stdev=111.44 00:19:11.868 lat (usec): min=1065, max=5288, avg=2285.67, stdev=102.41 00:19:11.868 clat percentiles (usec): 00:19:11.868 | 1.00th=[ 2040], 5.00th=[ 2114], 10.00th=[ 2212], 20.00th=[ 2245], 00:19:11.868 | 30.00th=[ 2245], 40.00th=[ 2245], 50.00th=[ 2245], 60.00th=[ 2278], 00:19:11.868 | 70.00th=[ 2278], 80.00th=[ 2278], 90.00th=[ 2343], 95.00th=[ 2442], 00:19:11.868 | 99.00th=[ 2540], 99.50th=[ 2671], 99.90th=[ 3064], 99.95th=[ 4015], 00:19:11.868 | 99.99th=[ 5276] 00:19:11.868 bw ( KiB/s): min=55552, max=56432, per=100.00%, avg=56043.11, stdev=278.26, samples=9 00:19:11.868 iops : min=55552, max=56434, avg=56043.56, stdev=278.48, samples=9 00:19:11.868 lat (msec) : 2=0.18%, 4=99.77%, 10=0.05% 00:19:11.868 cpu : usr=7.68%, sys=18.76%, ctx=234437, majf=0, minf=32 00:19:11.868 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:11.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:11.868 issued rwts: total=280041,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.868 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:11.868 00:19:11.868 Run status group 0 (all jobs): 00:19:11.868 READ: bw=54.7MiB/s (57.3MB/s), 54.7MiB/s-54.7MiB/s (57.3MB/s-57.3MB/s), io=273MiB (287MB), run=5002-5002msec 00:19:11.868 00:19:11.868 Disk stats (read/write): 00:19:11.868 sda: ios=273655/0, merge=0/0, ticks=531921/0, in_queue=531921, util=98.13% 00:19:11.868 22:20:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:19:11.868 22:20:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:11.868 22:20:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:11.868 22:20:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.868 22:20:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:19:11.868 "tick_rate": 2100000000, 00:19:11.868 "ticks": 2202856484354, 00:19:11.868 "bdevs": [ 00:19:11.868 { 00:19:11.868 "name": "Malloc0", 00:19:11.868 "bytes_read": 719413760, 00:19:11.868 "num_read_ops": 701525, 00:19:11.868 "bytes_written": 0, 00:19:11.868 "num_write_ops": 0, 00:19:11.868 "bytes_unmapped": 0, 00:19:11.868 "num_unmap_ops": 0, 00:19:11.868 "bytes_copied": 0, 00:19:11.868 "num_copy_ops": 0, 00:19:11.868 "read_latency_ticks": 646360568796, 00:19:11.868 "max_read_latency_ticks": 5213364, 00:19:11.868 "min_read_latency_ticks": 10920, 00:19:11.868 "write_latency_ticks": 0, 00:19:11.868 "max_write_latency_ticks": 0, 00:19:11.868 "min_write_latency_ticks": 0, 00:19:11.868 "unmap_latency_ticks": 0, 00:19:11.868 "max_unmap_latency_ticks": 0, 00:19:11.868 "min_unmap_latency_ticks": 0, 00:19:11.868 "copy_latency_ticks": 0, 00:19:11.868 "max_copy_latency_ticks": 0, 00:19:11.868 "min_copy_latency_ticks": 0, 00:19:11.868 "io_error": {} 00:19:11.868 } 00:19:11.868 ] 00:19:11.868 }' 00:19:11.868 22:20:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:19:11.868 22:20:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=701525 00:19:11.868 22:20:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:19:11.868 22:20:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=719413760 00:19:11.869 22:20:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=56008 00:19:11.869 22:20:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=57352396 00:19:11.869 22:20:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@112 -- # '[' 56008 -gt 28000 ']' 00:19:11.869 22:20:43 iscsi_tgt.iscsi_tgt_qos -- 
qos/qos.sh@115 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 28000 00:19:11.869 22:20:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.869 22:20:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:11.869 22:20:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.869 22:20:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@116 -- # run_fio Malloc0 00:19:11.869 22:20:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:19:11.869 22:20:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:19:11.869 22:20:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:19:11.869 22:20:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:19:11.869 22:20:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:19:11.869 22:20:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:19:11.869 22:20:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:19:11.869 22:20:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:19:11.869 22:20:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.869 22:20:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:11.869 22:20:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.869 22:20:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:19:11.869 "tick_rate": 2100000000, 00:19:11.869 "ticks": 2203117030648, 00:19:11.869 "bdevs": [ 00:19:11.869 { 00:19:11.869 "name": "Malloc0", 00:19:11.869 "bytes_read": 719413760, 00:19:11.869 "num_read_ops": 701525, 00:19:11.869 "bytes_written": 0, 00:19:11.869 "num_write_ops": 0, 00:19:11.869 "bytes_unmapped": 0, 00:19:11.869 "num_unmap_ops": 0, 00:19:11.869 "bytes_copied": 0, 00:19:11.869 "num_copy_ops": 0, 00:19:11.869 "read_latency_ticks": 646360568796, 
00:19:11.869 "max_read_latency_ticks": 5213364, 00:19:11.869 "min_read_latency_ticks": 10920, 00:19:11.869 "write_latency_ticks": 0, 00:19:11.869 "max_write_latency_ticks": 0, 00:19:11.869 "min_write_latency_ticks": 0, 00:19:11.869 "unmap_latency_ticks": 0, 00:19:11.869 "max_unmap_latency_ticks": 0, 00:19:11.869 "min_unmap_latency_ticks": 0, 00:19:11.869 "copy_latency_ticks": 0, 00:19:11.869 "max_copy_latency_ticks": 0, 00:19:11.869 "min_copy_latency_ticks": 0, 00:19:11.869 "io_error": {} 00:19:11.869 } 00:19:11.869 ] 00:19:11.869 }' 00:19:11.869 22:20:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:19:11.869 22:20:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=701525 00:19:11.869 22:20:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:19:11.869 22:20:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=719413760 00:19:11.869 22:20:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:19:11.869 [global] 00:19:11.869 thread=1 00:19:11.869 invalidate=1 00:19:11.869 rw=randread 00:19:11.869 time_based=1 00:19:11.869 runtime=5 00:19:11.869 ioengine=libaio 00:19:11.869 direct=1 00:19:11.869 bs=1024 00:19:11.869 iodepth=128 00:19:11.869 norandommap=1 00:19:11.869 numjobs=1 00:19:11.869 00:19:11.869 [job0] 00:19:11.869 filename=/dev/sda 00:19:11.869 queue_depth set to 113 (sda) 00:19:12.128 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:19:12.128 fio-3.35 00:19:12.128 Starting 1 thread 00:19:17.435 00:19:17.435 job0: (groupid=0, jobs=1): err= 0: pid=91710: Tue Jul 23 22:20:49 2024 00:19:17.435 read: IOPS=28.0k, BW=27.3MiB/s (28.7MB/s)(137MiB/5005msec) 00:19:17.435 slat (usec): min=3, max=1547, avg=33.37, stdev=137.29 00:19:17.435 clat (usec): min=1574, max=8687, avg=4536.84, stdev=400.56 00:19:17.435 lat (usec): min=1584, 
max=8691, avg=4570.21, stdev=398.56 00:19:17.435 clat percentiles (usec): 00:19:17.435 | 1.00th=[ 3851], 5.00th=[ 4080], 10.00th=[ 4113], 20.00th=[ 4113], 00:19:17.435 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4752], 60.00th=[ 4817], 00:19:17.435 | 70.00th=[ 4883], 80.00th=[ 4948], 90.00th=[ 4948], 95.00th=[ 5014], 00:19:17.435 | 99.00th=[ 5014], 99.50th=[ 5080], 99.90th=[ 5211], 99.95th=[ 5932], 00:19:17.435 | 99.99th=[ 7767] 00:19:17.435 bw ( KiB/s): min=27972, max=28056, per=100.00%, avg=28031.56, stdev=29.91, samples=9 00:19:17.435 iops : min=27972, max=28056, avg=28031.56, stdev=29.91, samples=9 00:19:17.435 lat (msec) : 2=0.05%, 4=1.46%, 10=98.49% 00:19:17.435 cpu : usr=6.51%, sys=14.95%, ctx=75060, majf=0, minf=32 00:19:17.435 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:17.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:17.435 issued rwts: total=140114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.435 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:17.435 00:19:17.435 Run status group 0 (all jobs): 00:19:17.435 READ: bw=27.3MiB/s (28.7MB/s), 27.3MiB/s-27.3MiB/s (28.7MB/s-28.7MB/s), io=137MiB (143MB), run=5005-5005msec 00:19:17.435 00:19:17.435 Disk stats (read/write): 00:19:17.435 sda: ios=136892/0, merge=0/0, ticks=526298/0, in_queue=526298, util=98.13% 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:19:17.435 "tick_rate": 2100000000, 00:19:17.435 "ticks": 
2214544544928, 00:19:17.435 "bdevs": [ 00:19:17.435 { 00:19:17.435 "name": "Malloc0", 00:19:17.435 "bytes_read": 862890496, 00:19:17.435 "num_read_ops": 841639, 00:19:17.435 "bytes_written": 0, 00:19:17.435 "num_write_ops": 0, 00:19:17.435 "bytes_unmapped": 0, 00:19:17.435 "num_unmap_ops": 0, 00:19:17.435 "bytes_copied": 0, 00:19:17.435 "num_copy_ops": 0, 00:19:17.435 "read_latency_ticks": 1182927877608, 00:19:17.435 "max_read_latency_ticks": 6980806, 00:19:17.435 "min_read_latency_ticks": 10920, 00:19:17.435 "write_latency_ticks": 0, 00:19:17.435 "max_write_latency_ticks": 0, 00:19:17.435 "min_write_latency_ticks": 0, 00:19:17.435 "unmap_latency_ticks": 0, 00:19:17.435 "max_unmap_latency_ticks": 0, 00:19:17.435 "min_unmap_latency_ticks": 0, 00:19:17.435 "copy_latency_ticks": 0, 00:19:17.435 "max_copy_latency_ticks": 0, 00:19:17.435 "min_copy_latency_ticks": 0, 00:19:17.435 "io_error": {} 00:19:17.435 } 00:19:17.435 ] 00:19:17.435 }' 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=841639 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=862890496 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=28022 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=28695347 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@117 -- # verify_qos_limits 28022 28000 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=28022 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=28000 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # 
bc 00:19:17.435 I/O rate limiting tests successful 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@119 -- # echo 'I/O rate limiting tests successful' 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@122 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 0 --rw_mbytes_per_sec 27 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@123 -- # run_fio Malloc0 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:19:17.435 "tick_rate": 2100000000, 00:19:17.435 "ticks": 2214821975012, 00:19:17.435 "bdevs": [ 00:19:17.435 { 00:19:17.435 "name": "Malloc0", 
00:19:17.435 "bytes_read": 862890496, 00:19:17.435 "num_read_ops": 841639, 00:19:17.435 "bytes_written": 0, 00:19:17.435 "num_write_ops": 0, 00:19:17.435 "bytes_unmapped": 0, 00:19:17.435 "num_unmap_ops": 0, 00:19:17.435 "bytes_copied": 0, 00:19:17.435 "num_copy_ops": 0, 00:19:17.435 "read_latency_ticks": 1182927877608, 00:19:17.435 "max_read_latency_ticks": 6980806, 00:19:17.435 "min_read_latency_ticks": 10920, 00:19:17.435 "write_latency_ticks": 0, 00:19:17.435 "max_write_latency_ticks": 0, 00:19:17.435 "min_write_latency_ticks": 0, 00:19:17.435 "unmap_latency_ticks": 0, 00:19:17.435 "max_unmap_latency_ticks": 0, 00:19:17.435 "min_unmap_latency_ticks": 0, 00:19:17.435 "copy_latency_ticks": 0, 00:19:17.435 "max_copy_latency_ticks": 0, 00:19:17.435 "min_copy_latency_ticks": 0, 00:19:17.435 "io_error": {} 00:19:17.435 } 00:19:17.435 ] 00:19:17.435 }' 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=841639 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=862890496 00:19:17.435 22:20:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:19:17.435 [global] 00:19:17.435 thread=1 00:19:17.435 invalidate=1 00:19:17.435 rw=randread 00:19:17.435 time_based=1 00:19:17.435 runtime=5 00:19:17.435 ioengine=libaio 00:19:17.435 direct=1 00:19:17.435 bs=1024 00:19:17.435 iodepth=128 00:19:17.435 norandommap=1 00:19:17.435 numjobs=1 00:19:17.436 00:19:17.436 [job0] 00:19:17.436 filename=/dev/sda 00:19:17.436 queue_depth set to 113 (sda) 00:19:17.695 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:19:17.695 fio-3.35 00:19:17.695 Starting 1 thread 00:19:22.973 00:19:22.973 
job0: (groupid=0, jobs=1): err= 0: pid=91801: Tue Jul 23 22:20:54 2024 00:19:22.974 read: IOPS=27.6k, BW=27.0MiB/s (28.3MB/s)(135MiB/5005msec) 00:19:22.974 slat (usec): min=2, max=1343, avg=34.24, stdev=151.16 00:19:22.974 clat (usec): min=691, max=8791, avg=4594.59, stdev=450.08 00:19:22.974 lat (usec): min=696, max=8799, avg=4628.83, stdev=444.69 00:19:22.974 clat percentiles (usec): 00:19:22.974 | 1.00th=[ 3556], 5.00th=[ 4047], 10.00th=[ 4080], 20.00th=[ 4146], 00:19:22.974 | 30.00th=[ 4178], 40.00th=[ 4359], 50.00th=[ 4817], 60.00th=[ 4883], 00:19:22.974 | 70.00th=[ 4948], 80.00th=[ 5014], 90.00th=[ 5014], 95.00th=[ 5080], 00:19:22.974 | 99.00th=[ 5407], 99.50th=[ 5473], 99.90th=[ 5800], 99.95th=[ 6652], 00:19:22.974 | 99.99th=[ 8717] 00:19:22.974 bw ( KiB/s): min=27648, max=27720, per=100.00%, avg=27687.33, stdev=21.49, samples=9 00:19:22.974 iops : min=27648, max=27720, avg=27687.33, stdev=21.49, samples=9 00:19:22.974 lat (usec) : 750=0.01%, 1000=0.01% 00:19:22.974 lat (msec) : 2=0.03%, 4=4.00%, 10=95.95% 00:19:22.974 cpu : usr=5.24%, sys=11.53%, ctx=74667, majf=0, minf=32 00:19:22.974 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:22.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.974 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:22.974 issued rwts: total=138352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.974 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:22.974 00:19:22.974 Run status group 0 (all jobs): 00:19:22.974 READ: bw=27.0MiB/s (28.3MB/s), 27.0MiB/s-27.0MiB/s (28.3MB/s-28.3MB/s), io=135MiB (142MB), run=5005-5005msec 00:19:22.974 00:19:22.974 Disk stats (read/write): 00:19:22.974 sda: ios=135252/0, merge=0/0, ticks=530707/0, in_queue=530707, util=98.13% 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:19:22.974 "tick_rate": 2100000000, 00:19:22.974 "ticks": 2226262221994, 00:19:22.974 "bdevs": [ 00:19:22.974 { 00:19:22.974 "name": "Malloc0", 00:19:22.974 "bytes_read": 1004562944, 00:19:22.974 "num_read_ops": 979991, 00:19:22.974 "bytes_written": 0, 00:19:22.974 "num_write_ops": 0, 00:19:22.974 "bytes_unmapped": 0, 00:19:22.974 "num_unmap_ops": 0, 00:19:22.974 "bytes_copied": 0, 00:19:22.974 "num_copy_ops": 0, 00:19:22.974 "read_latency_ticks": 1695896230340, 00:19:22.974 "max_read_latency_ticks": 6980806, 00:19:22.974 "min_read_latency_ticks": 10920, 00:19:22.974 "write_latency_ticks": 0, 00:19:22.974 "max_write_latency_ticks": 0, 00:19:22.974 "min_write_latency_ticks": 0, 00:19:22.974 "unmap_latency_ticks": 0, 00:19:22.974 "max_unmap_latency_ticks": 0, 00:19:22.974 "min_unmap_latency_ticks": 0, 00:19:22.974 "copy_latency_ticks": 0, 00:19:22.974 "max_copy_latency_ticks": 0, 00:19:22.974 "min_copy_latency_ticks": 0, 00:19:22.974 "io_error": {} 00:19:22.974 } 00:19:22.974 ] 00:19:22.974 }' 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=979991 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=1004562944 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=27670 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=28334489 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@124 -- # verify_qos_limits 28334489 28311552 
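The IOPS_RESULT and BANDWIDTH_RESULT values above are simple deltas of the `bdev_get_iostat` counters captured before and after the 5-second fio run. A minimal re-derivation using this run's start/end counters (integer division is an assumption, but it matches the logged values):

```shell
# Recompute the logged results from the iostat counters bracketing the fio run
# (run_time=5, as set in run_fio).
start_io_count=841639;      end_io_count=979991
start_bytes_read=862890496; end_bytes_read=1004562944
run_time=5
IOPS_RESULT=$(( (end_io_count - start_io_count) / run_time ))
BANDWIDTH_RESULT=$(( (end_bytes_read - start_bytes_read) / run_time ))
echo "$IOPS_RESULT $BANDWIDTH_RESULT"   # 27670 28334489, as logged above
```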
00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=28334489 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=28311552 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@127 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_mbytes_per_sec 0 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@128 -- # run_fio Malloc0 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
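`verify_qos_limits` above runs two `bc` comparisons (qos.sh lines 40 and 41) and proceeds only when both return 1. The actual expressions are not visible in this log; the following is a plausible sketch of a two-sided check, with a hypothetical ±10% tolerance band that is an assumption, applied to the values just verified (result 28334489 against limit 28311552):

```shell
# Hedged sketch of a two-sided rate check like the one verify_qos_limits
# performs; the 10% tolerance is assumed, not taken from qos.sh.
result=28334489
limit=28311552
lower_ok=$(( result >= limit - limit / 10 ))  # achieved rate not far below cap
upper_ok=$(( result <= limit + limit / 10 ))  # and not far above it
echo "$lower_ok $upper_ok"   # 1 1, mirroring the two '[' 1 -eq 1 ']' tests
```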
00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:19:22.974 "tick_rate": 2100000000, 00:19:22.974 "ticks": 2226523798114, 00:19:22.974 "bdevs": [ 00:19:22.974 { 00:19:22.974 "name": "Malloc0", 00:19:22.974 "bytes_read": 1004562944, 00:19:22.974 "num_read_ops": 979991, 00:19:22.974 "bytes_written": 0, 00:19:22.974 "num_write_ops": 0, 00:19:22.974 "bytes_unmapped": 0, 00:19:22.974 "num_unmap_ops": 0, 00:19:22.974 "bytes_copied": 0, 00:19:22.974 "num_copy_ops": 0, 00:19:22.974 "read_latency_ticks": 1695896230340, 00:19:22.974 "max_read_latency_ticks": 6980806, 00:19:22.974 "min_read_latency_ticks": 10920, 00:19:22.974 "write_latency_ticks": 0, 00:19:22.974 "max_write_latency_ticks": 0, 00:19:22.974 "min_write_latency_ticks": 0, 00:19:22.974 "unmap_latency_ticks": 0, 00:19:22.974 "max_unmap_latency_ticks": 0, 00:19:22.974 "min_unmap_latency_ticks": 0, 00:19:22.974 "copy_latency_ticks": 0, 00:19:22.974 "max_copy_latency_ticks": 0, 00:19:22.974 "min_copy_latency_ticks": 0, 00:19:22.974 "io_error": {} 00:19:22.974 } 00:19:22.974 ] 00:19:22.974 }' 00:19:22.974 22:20:54 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:19:22.974 22:20:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=979991 00:19:22.974 22:20:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:19:22.974 22:20:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=1004562944 00:19:22.974 22:20:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:19:22.974 [global] 00:19:22.974 thread=1 00:19:22.974 invalidate=1 00:19:22.974 rw=randread 00:19:22.974 time_based=1 00:19:22.974 runtime=5 00:19:22.974 ioengine=libaio 00:19:22.974 direct=1 00:19:22.974 bs=1024 00:19:22.974 iodepth=128 00:19:22.974 norandommap=1 00:19:22.974 numjobs=1 00:19:22.974 00:19:22.974 [job0] 00:19:22.974 filename=/dev/sda 00:19:22.974 
queue_depth set to 113 (sda) 00:19:23.234 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:19:23.234 fio-3.35 00:19:23.234 Starting 1 thread 00:19:28.510 00:19:28.510 job0: (groupid=0, jobs=1): err= 0: pid=91890: Tue Jul 23 22:21:00 2024 00:19:28.510 read: IOPS=55.6k, BW=54.3MiB/s (56.9MB/s)(271MiB/5002msec) 00:19:28.510 slat (nsec): min=1953, max=670966, avg=16670.56, stdev=46287.35 00:19:28.510 clat (usec): min=1375, max=4055, avg=2285.94, stdev=95.98 00:19:28.510 lat (usec): min=1384, max=4060, avg=2302.61, stdev=84.93 00:19:28.510 clat percentiles (usec): 00:19:28.510 | 1.00th=[ 2057], 5.00th=[ 2114], 10.00th=[ 2212], 20.00th=[ 2245], 00:19:28.510 | 30.00th=[ 2245], 40.00th=[ 2278], 50.00th=[ 2278], 60.00th=[ 2278], 00:19:28.510 | 70.00th=[ 2278], 80.00th=[ 2311], 90.00th=[ 2376], 95.00th=[ 2474], 00:19:28.510 | 99.00th=[ 2606], 99.50th=[ 2671], 99.90th=[ 2769], 99.95th=[ 3130], 00:19:28.510 | 99.99th=[ 3687] 00:19:28.510 bw ( KiB/s): min=54506, max=56350, per=100.00%, avg=55631.56, stdev=580.00, samples=9 00:19:28.510 iops : min=54506, max=56350, avg=55631.56, stdev=580.17, samples=9 00:19:28.510 lat (msec) : 2=0.12%, 4=99.88%, 10=0.01% 00:19:28.510 cpu : usr=7.16%, sys=19.12%, ctx=232321, majf=0, minf=32 00:19:28.510 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:28.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:28.510 issued rwts: total=277966,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.510 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:28.510 00:19:28.510 Run status group 0 (all jobs): 00:19:28.510 READ: bw=54.3MiB/s (56.9MB/s), 54.3MiB/s-54.3MiB/s (56.9MB/s-56.9MB/s), io=271MiB (285MB), run=5002-5002msec 00:19:28.510 00:19:28.510 Disk stats (read/write): 00:19:28.510 sda: ios=271820/0, merge=0/0, ticks=533011/0, 
in_queue=533011, util=98.08% 00:19:28.510 22:21:00 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:19:28.510 22:21:00 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.510 22:21:00 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:28.510 22:21:00 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.510 22:21:00 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:19:28.510 "tick_rate": 2100000000, 00:19:28.510 "ticks": 2237915667704, 00:19:28.510 "bdevs": [ 00:19:28.510 { 00:19:28.510 "name": "Malloc0", 00:19:28.510 "bytes_read": 1289200128, 00:19:28.510 "num_read_ops": 1257957, 00:19:28.510 "bytes_written": 0, 00:19:28.510 "num_write_ops": 0, 00:19:28.510 "bytes_unmapped": 0, 00:19:28.510 "num_unmap_ops": 0, 00:19:28.510 "bytes_copied": 0, 00:19:28.510 "num_copy_ops": 0, 00:19:28.510 "read_latency_ticks": 1750677292714, 00:19:28.510 "max_read_latency_ticks": 6980806, 00:19:28.510 "min_read_latency_ticks": 10920, 00:19:28.510 "write_latency_ticks": 0, 00:19:28.510 "max_write_latency_ticks": 0, 00:19:28.510 "min_write_latency_ticks": 0, 00:19:28.510 "unmap_latency_ticks": 0, 00:19:28.510 "max_unmap_latency_ticks": 0, 00:19:28.510 "min_unmap_latency_ticks": 0, 00:19:28.510 "copy_latency_ticks": 0, 00:19:28.510 "max_copy_latency_ticks": 0, 00:19:28.510 "min_copy_latency_ticks": 0, 00:19:28.510 "io_error": {} 00:19:28.510 } 00:19:28.510 ] 00:19:28.510 }' 00:19:28.510 22:21:00 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:19:28.510 22:21:00 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=1257957 00:19:28.510 22:21:00 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:19:28.510 22:21:00 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=1289200128 00:19:28.511 22:21:00 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=55593 00:19:28.511 22:21:00 
iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=56927436 00:19:28.511 22:21:00 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@129 -- # '[' 56927436 -gt 28311552 ']' 00:19:28.511 22:21:00 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@132 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_mbytes_per_sec 27 --r_mbytes_per_sec 13 00:19:28.511 22:21:00 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.511 22:21:00 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:28.511 22:21:00 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.511 22:21:00 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@133 -- # run_fio Malloc0 00:19:28.511 22:21:00 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:19:28.511 22:21:00 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:19:28.511 22:21:00 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:19:28.511 22:21:00 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:19:28.511 22:21:00 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:19:28.511 22:21:00 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:19:28.511 22:21:00 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:19:28.511 22:21:00 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:19:28.511 22:21:00 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.511 22:21:00 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:28.511 22:21:00 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.511 22:21:00 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:19:28.511 "tick_rate": 2100000000, 00:19:28.511 "ticks": 2238184586916, 00:19:28.511 "bdevs": [ 00:19:28.511 { 00:19:28.511 "name": "Malloc0", 00:19:28.511 "bytes_read": 1289200128, 00:19:28.511 "num_read_ops": 1257957, 
00:19:28.511 "bytes_written": 0, 00:19:28.511 "num_write_ops": 0, 00:19:28.511 "bytes_unmapped": 0, 00:19:28.511 "num_unmap_ops": 0, 00:19:28.511 "bytes_copied": 0, 00:19:28.511 "num_copy_ops": 0, 00:19:28.511 "read_latency_ticks": 1750677292714, 00:19:28.511 "max_read_latency_ticks": 6980806, 00:19:28.511 "min_read_latency_ticks": 10920, 00:19:28.511 "write_latency_ticks": 0, 00:19:28.511 "max_write_latency_ticks": 0, 00:19:28.511 "min_write_latency_ticks": 0, 00:19:28.511 "unmap_latency_ticks": 0, 00:19:28.511 "max_unmap_latency_ticks": 0, 00:19:28.511 "min_unmap_latency_ticks": 0, 00:19:28.511 "copy_latency_ticks": 0, 00:19:28.511 "max_copy_latency_ticks": 0, 00:19:28.511 "min_copy_latency_ticks": 0, 00:19:28.511 "io_error": {} 00:19:28.511 } 00:19:28.511 ] 00:19:28.511 }' 00:19:28.511 22:21:00 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:19:28.511 22:21:00 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=1257957 00:19:28.511 22:21:00 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:19:28.511 22:21:00 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=1289200128 00:19:28.511 22:21:00 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:19:28.511 [global] 00:19:28.511 thread=1 00:19:28.511 invalidate=1 00:19:28.511 rw=randread 00:19:28.511 time_based=1 00:19:28.511 runtime=5 00:19:28.511 ioengine=libaio 00:19:28.511 direct=1 00:19:28.511 bs=1024 00:19:28.511 iodepth=128 00:19:28.511 norandommap=1 00:19:28.511 numjobs=1 00:19:28.511 00:19:28.511 [job0] 00:19:28.511 filename=/dev/sda 00:19:28.511 queue_depth set to 113 (sda) 00:19:28.770 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:19:28.770 fio-3.35 00:19:28.770 Starting 1 thread 00:19:34.042 00:19:34.042 job0: (groupid=0, jobs=1): err= 0: pid=91975: Tue Jul 23 22:21:05 2024 
00:19:34.042 read: IOPS=13.3k, BW=13.0MiB/s (13.6MB/s)(65.1MiB/5008msec) 00:19:34.042 slat (usec): min=3, max=1762, avg=71.85, stdev=220.61 00:19:34.042 clat (usec): min=2222, max=17826, avg=9542.00, stdev=519.57 00:19:34.042 lat (usec): min=2246, max=17838, avg=9613.84, stdev=512.84 00:19:34.042 clat percentiles (usec): 00:19:34.042 | 1.00th=[ 8586], 5.00th=[ 8979], 10.00th=[ 9110], 20.00th=[ 9110], 00:19:34.042 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9765], 60.00th=[ 9896], 00:19:34.042 | 70.00th=[ 9896], 80.00th=[10028], 90.00th=[10028], 95.00th=[10028], 00:19:34.042 | 99.00th=[10159], 99.50th=[10159], 99.90th=[12780], 99.95th=[14877], 00:19:34.042 | 99.99th=[16909] 00:19:34.042 bw ( KiB/s): min=13284, max=13344, per=100.00%, avg=13329.33, stdev=19.57, samples=9 00:19:34.042 iops : min=13284, max=13344, avg=13329.33, stdev=19.57, samples=9 00:19:34.042 lat (msec) : 4=0.06%, 10=87.21%, 20=12.73% 00:19:34.042 cpu : usr=3.91%, sys=10.33%, ctx=37267, majf=0, minf=32 00:19:34.042 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:34.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:34.042 issued rwts: total=66663,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.042 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.042 00:19:34.042 Run status group 0 (all jobs): 00:19:34.042 READ: bw=13.0MiB/s (13.6MB/s), 13.0MiB/s-13.0MiB/s (13.6MB/s-13.6MB/s), io=65.1MiB (68.3MB), run=5008-5008msec 00:19:34.042 00:19:34.042 Disk stats (read/write): 00:19:34.042 sda: ios=65107/0, merge=0/0, ticks=543357/0, in_queue=543357, util=98.15% 00:19:34.042 22:21:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:19:34.042 22:21:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.042 22:21:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set 
+x 00:19:34.042 22:21:05 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.042 22:21:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:19:34.042 "tick_rate": 2100000000, 00:19:34.042 "ticks": 2249585290378, 00:19:34.042 "bdevs": [ 00:19:34.042 { 00:19:34.042 "name": "Malloc0", 00:19:34.042 "bytes_read": 1357463040, 00:19:34.042 "num_read_ops": 1324620, 00:19:34.042 "bytes_written": 0, 00:19:34.042 "num_write_ops": 0, 00:19:34.042 "bytes_unmapped": 0, 00:19:34.042 "num_unmap_ops": 0, 00:19:34.042 "bytes_copied": 0, 00:19:34.042 "num_copy_ops": 0, 00:19:34.042 "read_latency_ticks": 2358766741582, 00:19:34.042 "max_read_latency_ticks": 11655662, 00:19:34.042 "min_read_latency_ticks": 10920, 00:19:34.042 "write_latency_ticks": 0, 00:19:34.042 "max_write_latency_ticks": 0, 00:19:34.042 "min_write_latency_ticks": 0, 00:19:34.042 "unmap_latency_ticks": 0, 00:19:34.042 "max_unmap_latency_ticks": 0, 00:19:34.042 "min_unmap_latency_ticks": 0, 00:19:34.042 "copy_latency_ticks": 0, 00:19:34.042 "max_copy_latency_ticks": 0, 00:19:34.042 "min_copy_latency_ticks": 0, 00:19:34.042 "io_error": {} 00:19:34.042 } 00:19:34.042 ] 00:19:34.042 }' 00:19:34.042 22:21:05 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:19:34.042 22:21:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=1324620 00:19:34.042 22:21:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:19:34.042 22:21:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=1357463040 00:19:34.042 22:21:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=13332 00:19:34.042 22:21:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=13652582 00:19:34.042 22:21:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@134 -- # verify_qos_limits 13652582 13631488 00:19:34.042 22:21:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=13652582 00:19:34.042 22:21:06 iscsi_tgt.iscsi_tgt_qos -- 
qos/qos.sh@38 -- # local limit=13631488 00:19:34.042 22:21:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:19:34.042 22:21:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:19:34.042 22:21:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:19:34.042 I/O bandwidth limiting tests successful 00:19:34.042 Cleaning up iSCSI connection 00:19:34.042 22:21:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:19:34.042 22:21:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@136 -- # echo 'I/O bandwidth limiting tests successful' 00:19:34.042 22:21:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@138 -- # iscsicleanup 00:19:34.042 22:21:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:19:34.042 22:21:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:19:34.042 Logging out of session [sid: 20, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:19:34.042 Logout of [sid: 20, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
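For reference, the 13631488 limit passed to the final `verify_qos_limits` is exactly the 13 MiB/s `--r_mbytes_per_sec` cap set earlier, converted to bytes per second:

```shell
# 13 MiB/s read cap expressed in bytes/s, as used by verify_qos_limits above;
# the measured BANDWIDTH_RESULT (13652582) lands just above this figure.
r_mbytes_per_sec=13
limit_bytes=$(( r_mbytes_per_sec * 1024 * 1024 ))
echo "$limit_bytes"   # 13631488
```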
00:19:34.042 22:21:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:19:34.042 22:21:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@983 -- # rm -rf 00:19:34.042 22:21:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@139 -- # rpc_cmd iscsi_delete_target_node iqn.2016-06.io.spdk:Target1 00:19:34.043 22:21:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.043 22:21:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:34.043 22:21:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.043 22:21:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@141 -- # rm -f ./local-job0-0-verify.state 00:19:34.043 22:21:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:19:34.043 22:21:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@143 -- # killprocess 91368 00:19:34.043 22:21:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@948 -- # '[' -z 91368 ']' 00:19:34.043 22:21:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@952 -- # kill -0 91368 00:19:34.043 22:21:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@953 -- # uname 00:19:34.043 22:21:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:34.043 22:21:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91368 00:19:34.043 killing process with pid 91368 00:19:34.043 22:21:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:34.043 22:21:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:34.043 22:21:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91368' 00:19:34.043 22:21:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@967 -- # kill 91368 00:19:34.043 22:21:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@972 -- # wait 91368 00:19:34.302 
22:21:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@145 -- # iscsitestfini 00:19:34.302 22:21:06 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:19:34.302 00:19:34.302 real 0m41.162s 00:19:34.302 user 0m37.538s 00:19:34.302 sys 0m11.621s 00:19:34.302 22:21:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:34.302 ************************************ 00:19:34.302 END TEST iscsi_tgt_qos 00:19:34.302 ************************************ 00:19:34.302 22:21:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:19:34.561 22:21:06 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@39 -- # run_test iscsi_tgt_ip_migration /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration/ip_migration.sh 00:19:34.561 22:21:06 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:34.561 22:21:06 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:34.561 22:21:06 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:19:34.561 ************************************ 00:19:34.561 START TEST iscsi_tgt_ip_migration 00:19:34.561 ************************************ 00:19:34.561 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration/ip_migration.sh 00:19:34.561 * Looking for test storage... 
00:19:34.561 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration 00:19:34.561 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:34.561 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:34.561 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:34.561 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:34.561 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:34.561 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:34.561 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:34.561 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:19:34.561 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:34.561 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:34.561 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:34.561 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:34.561 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:34.561 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:34.561 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:34.561 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:34.561 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@26 
-- # INITIATOR_NAME=ANY 00:19:34.561 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:34.561 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@11 -- # iscsitestinit 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@13 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@14 -- # pids=() 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@16 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:19:34.562 22:21:06 
iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:19:34.562 #define SPDK_CONFIG_H 00:19:34.562 #define SPDK_CONFIG_APPS 1 00:19:34.562 #define SPDK_CONFIG_ARCH native 00:19:34.562 #undef SPDK_CONFIG_ASAN 00:19:34.562 #undef SPDK_CONFIG_AVAHI 00:19:34.562 #undef SPDK_CONFIG_CET 00:19:34.562 #define SPDK_CONFIG_COVERAGE 1 00:19:34.562 #define SPDK_CONFIG_CROSS_PREFIX 00:19:34.562 #undef SPDK_CONFIG_CRYPTO 00:19:34.562 #undef SPDK_CONFIG_CRYPTO_MLX5 00:19:34.562 #undef SPDK_CONFIG_CUSTOMOCF 00:19:34.562 #undef SPDK_CONFIG_DAOS 00:19:34.562 #define SPDK_CONFIG_DAOS_DIR 00:19:34.562 #define SPDK_CONFIG_DEBUG 1 00:19:34.562 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:19:34.562 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:19:34.562 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:19:34.562 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:19:34.562 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:19:34.562 #undef SPDK_CONFIG_DPDK_UADK 00:19:34.562 #define SPDK_CONFIG_ENV 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:19:34.562 #define SPDK_CONFIG_EXAMPLES 1 00:19:34.562 #undef SPDK_CONFIG_FC 00:19:34.562 #define SPDK_CONFIG_FC_PATH 00:19:34.562 #define SPDK_CONFIG_FIO_PLUGIN 1 00:19:34.562 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:19:34.562 #undef SPDK_CONFIG_FUSE 00:19:34.562 #undef SPDK_CONFIG_FUZZER 00:19:34.562 #define SPDK_CONFIG_FUZZER_LIB 00:19:34.562 #undef SPDK_CONFIG_GOLANG 00:19:34.562 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:19:34.562 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:19:34.562 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:19:34.562 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:19:34.562 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:19:34.562 #undef SPDK_CONFIG_HAVE_LIBBSD 00:19:34.562 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:19:34.562 #define SPDK_CONFIG_IDXD 1 00:19:34.562 #define SPDK_CONFIG_IDXD_KERNEL 1 00:19:34.562 #undef SPDK_CONFIG_IPSEC_MB 00:19:34.562 #define SPDK_CONFIG_IPSEC_MB_DIR 00:19:34.562 #define SPDK_CONFIG_ISAL 1 00:19:34.562 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:19:34.562 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:19:34.562 #define SPDK_CONFIG_LIBDIR 00:19:34.562 #undef SPDK_CONFIG_LTO 00:19:34.562 #define SPDK_CONFIG_MAX_LCORES 128 00:19:34.562 #define SPDK_CONFIG_NVME_CUSE 1 00:19:34.562 #undef SPDK_CONFIG_OCF 00:19:34.562 #define SPDK_CONFIG_OCF_PATH 00:19:34.562 #define SPDK_CONFIG_OPENSSL_PATH 00:19:34.562 #undef SPDK_CONFIG_PGO_CAPTURE 00:19:34.562 #define SPDK_CONFIG_PGO_DIR 00:19:34.562 #undef SPDK_CONFIG_PGO_USE 00:19:34.562 #define SPDK_CONFIG_PREFIX /usr/local 00:19:34.562 #undef SPDK_CONFIG_RAID5F 00:19:34.562 #undef SPDK_CONFIG_RBD 00:19:34.562 #define SPDK_CONFIG_RDMA 1 00:19:34.562 #define SPDK_CONFIG_RDMA_PROV verbs 00:19:34.562 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:19:34.562 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:19:34.562 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:19:34.562 #define SPDK_CONFIG_SHARED 1 00:19:34.562 #undef SPDK_CONFIG_SMA 00:19:34.562 #define SPDK_CONFIG_TESTS 
1 00:19:34.562 #undef SPDK_CONFIG_TSAN 00:19:34.562 #define SPDK_CONFIG_UBLK 1 00:19:34.562 #define SPDK_CONFIG_UBSAN 1 00:19:34.562 #undef SPDK_CONFIG_UNIT_TESTS 00:19:34.562 #define SPDK_CONFIG_URING 1 00:19:34.562 #define SPDK_CONFIG_URING_PATH 00:19:34.562 #define SPDK_CONFIG_URING_ZNS 1 00:19:34.562 #undef SPDK_CONFIG_USDT 00:19:34.562 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:19:34.562 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:19:34.562 #undef SPDK_CONFIG_VFIO_USER 00:19:34.562 #define SPDK_CONFIG_VFIO_USER_DIR 00:19:34.562 #define SPDK_CONFIG_VHOST 1 00:19:34.562 #define SPDK_CONFIG_VIRTIO 1 00:19:34.562 #undef SPDK_CONFIG_VTUNE 00:19:34.562 #define SPDK_CONFIG_VTUNE_DIR 00:19:34.562 #define SPDK_CONFIG_WERROR 1 00:19:34.562 #define SPDK_CONFIG_WPDK_DIR 00:19:34.562 #undef SPDK_CONFIG_XNVME 00:19:34.562 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@17 -- # NETMASK=127.0.0.0/24 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@18 -- # MIGRATION_ADDRESS=127.0.0.2 00:19:34.562 Running ip migration tests 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@56 -- # echo 'Running ip migration tests' 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@57 -- # timing_enter start_iscsi_tgt_0 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:19:34.562 Process pid: 92107 00:19:34.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock... 
00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@58 -- # rpc_first_addr=/var/tmp/spdk0.sock 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@59 -- # iscsi_tgt_start /var/tmp/spdk0.sock 1 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@39 -- # pid=92107 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@40 -- # echo 'Process pid: 92107' 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@41 -- # pids+=($pid) 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@43 -- # trap 'kill_all_iscsi_target; exit 1' SIGINT SIGTERM EXIT 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk0.sock -m 1 --wait-for-rpc 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@45 -- # waitforlisten 92107 /var/tmp/spdk0.sock 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@829 -- # '[' -z 92107 ']' 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk0.sock 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock...' 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:34.562 22:21:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:19:34.562 [2024-07-23 22:21:06.749545] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:19:34.562 [2024-07-23 22:21:06.749906] [ DPDK EAL parameters: iscsi --no-shconf -c 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92107 ] 00:19:34.822 [2024-07-23 22:21:06.879503] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:34.822 [2024-07-23 22:21:06.888935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.822 [2024-07-23 22:21:06.937047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@862 -- # return 0 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@46 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_set_options -o 30 -a 64 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@47 -- # rpc_cmd -s /var/tmp/spdk0.sock framework_start_init 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:19:35.760 [2024-07-23 22:21:07.714375] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.760 iscsi_tgt is 
listening. Running tests... 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@48 -- # echo 'iscsi_tgt is listening. Running tests...' 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@50 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_initiator_group 2 ANY 127.0.0.0/24 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@51 -- # rpc_cmd -s /var/tmp/spdk0.sock bdev_malloc_create 64 512 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:19:35.760 Malloc0 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@53 -- # trap 'kill_all_iscsi_target; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@60 -- # timing_exit start_iscsi_tgt_0 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@62 -- # timing_enter start_iscsi_tgt_1 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:19:35.760 22:21:07 
iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@63 -- # rpc_second_addr=/var/tmp/spdk1.sock 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@64 -- # iscsi_tgt_start /var/tmp/spdk1.sock 2 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@39 -- # pid=92140 00:19:35.760 Process pid: 92140 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@40 -- # echo 'Process pid: 92140' 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@41 -- # pids+=($pid) 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@43 -- # trap 'kill_all_iscsi_target; exit 1' SIGINT SIGTERM EXIT 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk1.sock -m 2 --wait-for-rpc 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@45 -- # waitforlisten 92140 /var/tmp/spdk1.sock 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@829 -- # '[' -z 92140 ']' 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk1.sock 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:35.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock... 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock...' 
00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:35.760 22:21:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:19:36.020 [2024-07-23 22:21:07.987885] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:19:36.020 [2024-07-23 22:21:07.987995] [ DPDK EAL parameters: iscsi --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92140 ] 00:19:36.020 [2024-07-23 22:21:08.114096] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:36.020 [2024-07-23 22:21:08.133747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.020 [2024-07-23 22:21:08.189700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.028 22:21:08 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:37.028 22:21:08 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@862 -- # return 0 00:19:37.028 22:21:08 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@46 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_set_options -o 30 -a 64 00:19:37.028 22:21:08 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.028 22:21:08 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:19:37.028 22:21:08 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.028 22:21:08 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@47 -- # rpc_cmd -s /var/tmp/spdk1.sock framework_start_init 00:19:37.028 22:21:08 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.028 22:21:08 
iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:19:37.028 [2024-07-23 22:21:08.912404] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:37.028 22:21:09 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.028 iscsi_tgt is listening. Running tests... 00:19:37.028 22:21:09 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@48 -- # echo 'iscsi_tgt is listening. Running tests...' 00:19:37.028 22:21:09 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@50 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_initiator_group 2 ANY 127.0.0.0/24 00:19:37.028 22:21:09 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.028 22:21:09 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:19:37.028 22:21:09 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.028 22:21:09 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@51 -- # rpc_cmd -s /var/tmp/spdk1.sock bdev_malloc_create 64 512 00:19:37.028 22:21:09 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.028 22:21:09 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:19:37.028 Malloc0 00:19:37.028 22:21:09 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.028 22:21:09 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@53 -- # trap 'kill_all_iscsi_target; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:19:37.028 22:21:09 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@65 -- # timing_exit start_iscsi_tgt_1 00:19:37.028 22:21:09 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:37.028 22:21:09 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:19:37.028 
22:21:09 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@67 -- # rpc_add_target_node /var/tmp/spdk0.sock 00:19:37.028 22:21:09 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@28 -- # ip netns exec spdk_iscsi_ns ip addr add 127.0.0.2/24 dev spdk_tgt_int 00:19:37.028 22:21:09 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@29 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_portal_group 1 127.0.0.2:3260 00:19:37.028 22:21:09 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.028 22:21:09 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:19:37.028 22:21:09 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.028 22:21:09 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@30 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_target_node target1 target1_alias Malloc0:0 1:2 64 -d 00:19:37.028 22:21:09 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.028 22:21:09 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:19:37.028 22:21:09 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.028 22:21:09 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@31 -- # ip netns exec spdk_iscsi_ns ip addr del 127.0.0.2/24 dev spdk_tgt_int 00:19:37.028 22:21:09 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@69 -- # sleep 1 00:19:37.965 22:21:10 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@70 -- # iscsiadm -m discovery -t sendtargets -p 127.0.0.2:3260 00:19:38.225 127.0.0.2:3260,1 iqn.2016-06.io.spdk:target1 00:19:38.225 22:21:10 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@71 -- # sleep 1 00:19:39.163 22:21:11 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@72 -- # iscsiadm -m node --login -p 127.0.0.2:3260 
00:19:39.163 Logging in to [iface: default, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] 00:19:39.163 Login to [iface: default, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] successful. 00:19:39.163 22:21:11 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@73 -- # waitforiscsidevices 1 00:19:39.163 22:21:11 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@116 -- # local num=1 00:19:39.163 22:21:11 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:19:39.163 22:21:11 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:19:39.163 22:21:11 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:19:39.163 22:21:11 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:19:39.163 [2024-07-23 22:21:11.209381] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:39.163 22:21:11 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # n=1 00:19:39.163 22:21:11 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:19:39.163 22:21:11 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@123 -- # return 0 00:19:39.163 22:21:11 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@77 -- # fiopid=92218 00:19:39.163 22:21:11 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 32 -t randrw -r 12 00:19:39.163 22:21:11 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@78 -- # sleep 3 00:19:39.163 [global] 00:19:39.163 thread=1 00:19:39.163 invalidate=1 00:19:39.163 rw=randrw 00:19:39.163 time_based=1 00:19:39.163 runtime=12 00:19:39.163 ioengine=libaio 00:19:39.163 direct=1 00:19:39.163 bs=4096 00:19:39.163 iodepth=32 00:19:39.163 norandommap=1 00:19:39.163 numjobs=1 00:19:39.163 00:19:39.163 [job0] 
00:19:39.163 filename=/dev/sda 00:19:39.163 queue_depth set to 113 (sda) 00:19:39.423 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32 00:19:39.423 fio-3.35 00:19:39.423 Starting 1 thread 00:19:39.423 [2024-07-23 22:21:11.410535] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:42.713 22:21:14 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@80 -- # rpc_cmd -s /var/tmp/spdk0.sock spdk_kill_instance SIGTERM 00:19:42.713 22:21:14 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.713 22:21:14 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:19:42.713 22:21:14 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.713 22:21:14 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@81 -- # wait 92107 00:19:42.713 22:21:14 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@83 -- # rpc_add_target_node /var/tmp/spdk1.sock 00:19:42.713 22:21:14 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@28 -- # ip netns exec spdk_iscsi_ns ip addr add 127.0.0.2/24 dev spdk_tgt_int 00:19:42.713 22:21:14 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@29 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_portal_group 1 127.0.0.2:3260 00:19:42.713 22:21:14 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.713 22:21:14 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:19:42.713 22:21:14 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.713 22:21:14 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@30 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_target_node target1 target1_alias Malloc0:0 1:2 64 -d 00:19:42.713 22:21:14 iscsi_tgt.iscsi_tgt_ip_migration -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.713 22:21:14 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:19:42.713 22:21:14 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.713 22:21:14 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@31 -- # ip netns exec spdk_iscsi_ns ip addr del 127.0.0.2/24 dev spdk_tgt_int 00:19:42.713 22:21:14 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@85 -- # wait 92218 00:19:52.696 [2024-07-23 22:21:23.522419] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:52.696 00:19:52.696 job0: (groupid=0, jobs=1): err= 0: pid=92247: Tue Jul 23 22:21:23 2024 00:19:52.696 read: IOPS=18.9k, BW=74.0MiB/s (77.6MB/s)(888MiB/12001msec) 00:19:52.696 slat (usec): min=2, max=839, avg= 4.26, stdev= 3.65 00:19:52.696 clat (usec): min=188, max=2007.5k, avg=867.80, stdev=17855.78 00:19:52.696 lat (usec): min=240, max=2007.5k, avg=872.06, stdev=17855.85 00:19:52.696 clat percentiles (usec): 00:19:52.696 | 1.00th=[ 453], 5.00th=[ 510], 10.00th=[ 586], 20.00th=[ 644], 00:19:52.696 | 30.00th=[ 660], 40.00th=[ 676], 50.00th=[ 701], 60.00th=[ 717], 00:19:52.696 | 70.00th=[ 742], 80.00th=[ 783], 90.00th=[ 865], 95.00th=[ 914], 00:19:52.696 | 99.00th=[ 979], 99.50th=[ 1012], 99.90th=[ 1303], 99.95th=[ 1729], 00:19:52.696 | 99.99th=[ 4752] 00:19:52.696 bw ( KiB/s): min=37672, max=92664, per=100.00%, avg=86634.00, stdev=14382.47, samples=20 00:19:52.696 iops : min= 9418, max=23166, avg=21658.60, stdev=3595.65, samples=20 00:19:52.696 write: IOPS=18.9k, BW=74.0MiB/s (77.6MB/s)(888MiB/12001msec); 0 zone resets 00:19:52.696 slat (usec): min=2, max=381, avg= 4.25, stdev= 3.03 00:19:52.696 clat (usec): min=176, max=2007.4k, avg=811.85, stdev=15744.09 00:19:52.696 lat (usec): min=213, max=2007.5k, avg=816.10, stdev=15744.15 00:19:52.696 clat percentiles (usec): 00:19:52.696 | 1.00th=[ 429], 5.00th=[ 510], 10.00th=[ 
570], 20.00th=[ 611], 00:19:52.696 | 30.00th=[ 627], 40.00th=[ 652], 50.00th=[ 668], 60.00th=[ 693], 00:19:52.696 | 70.00th=[ 725], 80.00th=[ 783], 90.00th=[ 857], 95.00th=[ 889], 00:19:52.696 | 99.00th=[ 955], 99.50th=[ 988], 99.90th=[ 1156], 99.95th=[ 1598], 00:19:52.696 | 99.99th=[ 4293] 00:19:52.696 bw ( KiB/s): min=37808, max=92952, per=100.00%, avg=86584.00, stdev=14486.54, samples=20 00:19:52.696 iops : min= 9452, max=23238, avg=21646.00, stdev=3621.64, samples=20 00:19:52.696 lat (usec) : 250=0.01%, 500=4.44%, 750=69.17%, 1000=25.86% 00:19:52.696 lat (msec) : 2=0.48%, 4=0.02%, 10=0.01%, >=2000=0.01% 00:19:52.696 cpu : usr=7.28%, sys=15.26%, ctx=35768, majf=0, minf=1 00:19:52.696 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0% 00:19:52.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.696 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:52.696 issued rwts: total=227324,227404,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:52.696 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:52.696 00:19:52.696 Run status group 0 (all jobs): 00:19:52.696 READ: bw=74.0MiB/s (77.6MB/s), 74.0MiB/s-74.0MiB/s (77.6MB/s-77.6MB/s), io=888MiB (931MB), run=12001-12001msec 00:19:52.696 WRITE: bw=74.0MiB/s (77.6MB/s), 74.0MiB/s-74.0MiB/s (77.6MB/s-77.6MB/s), io=888MiB (931MB), run=12001-12001msec 00:19:52.696 00:19:52.696 Disk stats (read/write): 00:19:52.696 sda: ios=224838/224844, merge=0/0, ticks=180737/174365, in_queue=355103, util=99.34% 00:19:52.696 22:21:23 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@87 -- # trap - SIGINT SIGTERM EXIT 00:19:52.696 22:21:23 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@89 -- # iscsicleanup 00:19:52.696 Cleaning up iSCSI connection 00:19:52.696 22:21:23 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:19:52.696 22:21:23 
iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:19:52.696 Logging out of session [sid: 21, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] 00:19:52.696 Logout of [sid: 21, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] successful. 00:19:52.696 22:21:23 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:19:52.696 22:21:23 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@983 -- # rm -rf 00:19:52.696 22:21:23 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@91 -- # rpc_cmd -s /var/tmp/spdk1.sock spdk_kill_instance SIGTERM 00:19:52.696 22:21:23 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.696 22:21:23 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:19:52.696 22:21:23 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.696 22:21:23 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@92 -- # wait 92140 00:19:52.696 22:21:23 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@93 -- # iscsitestfini 00:19:52.696 22:21:23 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:19:52.696 00:19:52.696 real 0m17.426s 00:19:52.696 user 0m22.519s 00:19:52.696 sys 0m4.373s 00:19:52.696 22:21:23 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:52.696 22:21:23 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:19:52.696 ************************************ 00:19:52.696 END TEST iscsi_tgt_ip_migration 00:19:52.696 ************************************ 00:19:52.696 22:21:24 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@40 -- # run_test iscsi_tgt_trace_record /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record/trace_record.sh 00:19:52.696 22:21:24 iscsi_tgt -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:52.696 22:21:24 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:52.696 22:21:24 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:19:52.696 ************************************ 00:19:52.696 START TEST iscsi_tgt_trace_record 00:19:52.696 ************************************ 00:19:52.696 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record/trace_record.sh 00:19:52.696 * Looking for test storage... 00:19:52.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record 00:19:52.696 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:52.696 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:52.696 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:52.696 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:52.696 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:52.696 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:52.696 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:52.696 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:19:52.696 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:52.696 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:52.696 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:52.696 22:21:24 
iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:52.696 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:52.696 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:52.696 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:52.696 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:52.696 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:52.696 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:52.696 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:52.696 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:52.697 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@11 -- # iscsitestinit 00:19:52.697 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:19:52.697 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@13 -- # TRACE_TMP_FOLDER=./tmp-trace 00:19:52.697 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@14 -- # TRACE_RECORD_OUTPUT=./tmp-trace/record.trace 00:19:52.697 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@15 -- # TRACE_RECORD_NOTICE_LOG=./tmp-trace/record.notice 00:19:52.697 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@16 -- # TRACE_TOOL_LOG=./tmp-trace/trace.log 00:19:52.697 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@22 -- # '[' -z 10.0.0.1 ']' 00:19:52.697 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@27 -- # '[' -z 10.0.0.2 ']' 00:19:52.697 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@32 -- # NUM_TRACE_ENTRIES=4096 00:19:52.697 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@33 -- # MALLOC_BDEV_SIZE=64 00:19:52.697 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@34 -- # MALLOC_BLOCK_SIZE=4096 00:19:52.697 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@36 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:52.697 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@37 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:19:52.697 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@39 -- # timing_enter start_iscsi_tgt 00:19:52.697 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:52.697 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:19:52.697 start iscsi_tgt with trace enabled 00:19:52.697 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@41 -- # echo 'start iscsi_tgt with trace enabled' 00:19:52.697 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@43 -- # iscsi_pid=92442 00:19:52.697 Process pid: 92442 00:19:52.697 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@44 -- # echo 'Process pid: 92442' 00:19:52.697 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@46 -- # trap 'killprocess $iscsi_pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:19:52.697 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@48 -- # waitforlisten 92442 00:19:52.697 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@829 -- # '[' -z 92442 ']' 00:19:52.697 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@42 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xf --num-trace-entries 4096 --tpoint-group all 00:19:52.697 
22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.697 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:52.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.697 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.697 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:52.697 22:21:24 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:19:52.697 [2024-07-23 22:21:24.199386] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:19:52.697 [2024-07-23 22:21:24.199493] [ DPDK EAL parameters: iscsi --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92442 ] 00:19:52.697 [2024-07-23 22:21:24.327809] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:52.697 [2024-07-23 22:21:24.344366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:52.697 [2024-07-23 22:21:24.393264] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask all specified. 00:19:52.697 [2024-07-23 22:21:24.393323] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s iscsi -p 92442' to capture a snapshot of events at runtime. 00:19:52.697 [2024-07-23 22:21:24.393333] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.697 [2024-07-23 22:21:24.393341] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:52.697 [2024-07-23 22:21:24.393348] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/iscsi_trace.pid92442 for offline analysis/debug. 00:19:52.697 [2024-07-23 22:21:24.393496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.697 [2024-07-23 22:21:24.393694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.697 [2024-07-23 22:21:24.393757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.697 [2024-07-23 22:21:24.394252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:52.697 [2024-07-23 22:21:24.436092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@862 -- # return 0 00:19:53.265 iscsi_tgt is listening. Running tests... 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@50 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@52 -- # timing_exit start_iscsi_tgt 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@54 -- # mkdir -p ./tmp-trace 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@56 -- # record_pid=92477 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace_record -s iscsi -p 92442 -f ./tmp-trace/record.trace -q 00:19:53.265 Trace record pid: 92477 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@57 -- # echo 'Trace record pid: 92477' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@59 -- # RPCS= 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@60 -- # RPCS+='iscsi_create_portal_group 1 10.0.0.1:3260\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@61 -- # RPCS+='iscsi_create_initiator_group 2 ANY 10.0.0.2/32\n' 00:19:53.265 Create bdevs and target nodes 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@63 -- # echo 'Create bdevs and target nodes' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@64 -- # CONNECTION_NUMBER=15 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # seq 0 15 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc0\n' 
00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target0 Target0_alias Malloc0:0 1:2 256 -d\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc1\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target1 Target1_alias Malloc1:0 1:2 256 -d\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc2\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target2 Target2_alias Malloc2:0 1:2 256 -d\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc3\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target3 Target3_alias Malloc3:0 1:2 256 -d\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc4\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target4 Target4_alias Malloc4:0 1:2 256 -d\n' 00:19:53.265 22:21:25 
iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc5\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target5 Target5_alias Malloc5:0 1:2 256 -d\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc6\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target6 Target6_alias Malloc6:0 1:2 256 -d\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc7\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target7 Target7_alias Malloc7:0 1:2 256 -d\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc8\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target8 Target8_alias Malloc8:0 1:2 256 -d\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # 
RPCS+='bdev_malloc_create 64 4096 -b Malloc9\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target9 Target9_alias Malloc9:0 1:2 256 -d\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc10\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target10 Target10_alias Malloc10:0 1:2 256 -d\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc11\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target11 Target11_alias Malloc11:0 1:2 256 -d\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc12\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target12 Target12_alias Malloc12:0 1:2 256 -d\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc13\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target13 
Target13_alias Malloc13:0 1:2 256 -d\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc14\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target14 Target14_alias Malloc14:0 1:2 256 -d\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc15\n' 00:19:53.265 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target15 Target15_alias Malloc15:0 1:2 256 -d\n' 00:19:53.266 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@69 -- # echo -e iscsi_create_portal_group 1 '10.0.0.1:3260\niscsi_create_initiator_group' 2 ANY '10.0.0.2/32\nbdev_malloc_create' 64 4096 -b 'Malloc0\niscsi_create_target_node' Target0 Target0_alias Malloc0:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc1\niscsi_create_target_node' Target1 Target1_alias Malloc1:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc2\niscsi_create_target_node' Target2 Target2_alias Malloc2:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc3\niscsi_create_target_node' Target3 Target3_alias Malloc3:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc4\niscsi_create_target_node' Target4 Target4_alias Malloc4:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc5\niscsi_create_target_node' Target5 Target5_alias Malloc5:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc6\niscsi_create_target_node' Target6 Target6_alias Malloc6:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc7\niscsi_create_target_node' Target7 
Target7_alias Malloc7:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc8\niscsi_create_target_node' Target8 Target8_alias Malloc8:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc9\niscsi_create_target_node' Target9 Target9_alias Malloc9:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc10\niscsi_create_target_node' Target10 Target10_alias Malloc10:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc11\niscsi_create_target_node' Target11 Target11_alias Malloc11:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc12\niscsi_create_target_node' Target12 Target12_alias Malloc12:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc13\niscsi_create_target_node' Target13 Target13_alias Malloc13:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc14\niscsi_create_target_node' Target14 Target14_alias Malloc14:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc15\niscsi_create_target_node' Target15 Target15_alias Malloc15:0 1:2 256 '-d\n' 00:19:53.266 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:53.832 Malloc0 00:19:53.832 Malloc1 00:19:53.832 Malloc2 00:19:53.832 Malloc3 00:19:53.832 Malloc4 00:19:53.832 Malloc5 00:19:53.832 Malloc6 00:19:53.832 Malloc7 00:19:53.832 Malloc8 00:19:53.832 Malloc9 00:19:53.832 Malloc10 00:19:53.832 Malloc11 00:19:53.832 Malloc12 00:19:53.832 Malloc13 00:19:53.832 Malloc14 00:19:53.832 Malloc15 00:19:53.832 22:21:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@71 -- # sleep 1 00:19:54.766 22:21:26 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@73 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:19:54.766 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target0 00:19:54.766 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:19:54.766 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2 00:19:54.766 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:19:54.766 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target4 00:19:54.766 
10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target5 00:19:54.766 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target6 00:19:54.766 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target7 00:19:54.767 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target8 00:19:54.767 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target9 00:19:54.767 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target10 00:19:54.767 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target11 00:19:54.767 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target12 00:19:54.767 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target13 00:19:54.767 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target14 00:19:54.767 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target15 00:19:54.767 22:21:26 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@74 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:19:54.767 [2024-07-23 22:21:26.930365] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:54.767 [2024-07-23 22:21:26.944049] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:55.025 [2024-07-23 22:21:26.972153] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:55.025 [2024-07-23 22:21:26.996263] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:55.025 [2024-07-23 22:21:27.027632] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:55.025 [2024-07-23 22:21:27.055260] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:55.025 [2024-07-23 22:21:27.096841] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:55.025 [2024-07-23 22:21:27.117234] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:55.025 [2024-07-23 22:21:27.148432] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:55.025 [2024-07-23 22:21:27.176712] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:55.025 
[2024-07-23 22:21:27.201895] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:55.284 [2024-07-23 22:21:27.227726] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:55.284 [2024-07-23 22:21:27.256936] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:55.284 [2024-07-23 22:21:27.270460] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:55.284 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] 00:19:55.284 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:19:55.284 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:19:55.284 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:19:55.285 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:19:55.285 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:19:55.285 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:19:55.285 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:19:55.285 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:19:55.285 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:19:55.285 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:19:55.285 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:19:55.285 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:19:55.285 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:19:55.285 Logging in to [iface: default, target: 
iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:19:55.285 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:19:55.285 Login to [iface: default, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] successful. 00:19:55.285 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:19:55.285 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:19:55.285 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:19:55.285 Login to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:19:55.285 Login to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:19:55.285 Login to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:19:55.285 Login to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:19:55.285 Login to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:19:55.285 Login to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:19:55.285 Login to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:19:55.285 Login to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:19:55.285 Login to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:19:55.285 Login to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:19:55.285 Login to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 00:19:55.285 Login to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 
00:19:55.285 22:21:27 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@75 -- # waitforiscsidevices 16 00:19:55.285 22:21:27 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@116 -- # local num=16 00:19:55.285 22:21:27 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:19:55.285 22:21:27 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:19:55.285 [2024-07-23 22:21:27.299417] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:55.285 22:21:27 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:19:55.285 22:21:27 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:19:55.285 [2024-07-23 22:21:27.303035] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:55.285 22:21:27 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # n=16 00:19:55.285 22:21:27 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@120 -- # '[' 16 -ne 16 ']' 00:19:55.285 22:21:27 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@123 -- # return 0 00:19:55.285 22:21:27 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@77 -- # trap 'iscsicleanup; killprocess $iscsi_pid; killprocess $record_pid; delete_tmp_files; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:19:55.285 Running FIO 00:19:55.285 22:21:27 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@79 -- # echo 'Running FIO' 00:19:55.285 22:21:27 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 32 -t randrw -r 1 00:19:55.285 [global] 00:19:55.285 thread=1 00:19:55.285 invalidate=1 00:19:55.285 rw=randrw 00:19:55.285 time_based=1 00:19:55.285 runtime=1 00:19:55.285 ioengine=libaio 00:19:55.285 direct=1 00:19:55.285 bs=131072 00:19:55.285 iodepth=32 00:19:55.285 
norandommap=1 00:19:55.285 numjobs=1 00:19:55.285 00:19:55.285 [job0] 00:19:55.285 filename=/dev/sda 00:19:55.285 [job1] 00:19:55.285 filename=/dev/sdb 00:19:55.285 [job2] 00:19:55.285 filename=/dev/sdc 00:19:55.285 [job3] 00:19:55.285 filename=/dev/sdd 00:19:55.285 [job4] 00:19:55.285 filename=/dev/sde 00:19:55.285 [job5] 00:19:55.285 filename=/dev/sdf 00:19:55.285 [job6] 00:19:55.285 filename=/dev/sdg 00:19:55.285 [job7] 00:19:55.285 filename=/dev/sdh 00:19:55.285 [job8] 00:19:55.285 filename=/dev/sdi 00:19:55.285 [job9] 00:19:55.285 filename=/dev/sdj 00:19:55.285 [job10] 00:19:55.285 filename=/dev/sdk 00:19:55.285 [job11] 00:19:55.285 filename=/dev/sdl 00:19:55.285 [job12] 00:19:55.285 filename=/dev/sdm 00:19:55.285 [job13] 00:19:55.285 filename=/dev/sdn 00:19:55.285 [job14] 00:19:55.285 filename=/dev/sdp 00:19:55.285 [job15] 00:19:55.285 filename=/dev/sdo 00:19:55.544 queue_depth set to 113 (sda) 00:19:55.544 queue_depth set to 113 (sdb) 00:19:55.544 queue_depth set to 113 (sdc) 00:19:55.544 queue_depth set to 113 (sdd) 00:19:55.544 queue_depth set to 113 (sde) 00:19:55.803 queue_depth set to 113 (sdf) 00:19:55.803 queue_depth set to 113 (sdg) 00:19:55.803 queue_depth set to 113 (sdh) 00:19:55.803 queue_depth set to 113 (sdi) 00:19:55.803 queue_depth set to 113 (sdj) 00:19:55.803 queue_depth set to 113 (sdk) 00:19:55.803 queue_depth set to 113 (sdl) 00:19:55.803 queue_depth set to 113 (sdm) 00:19:55.803 queue_depth set to 113 (sdn) 00:19:55.803 queue_depth set to 113 (sdp) 00:19:55.803 queue_depth set to 113 (sdo) 00:19:56.062 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:56.062 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:56.062 job2: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:56.062 job3: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, 
(T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:56.062 job4: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:56.062 job5: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:56.062 job6: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:56.062 job7: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:56.062 job8: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:56.062 job9: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:56.062 job10: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:56.062 job11: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:56.062 job12: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:56.062 job13: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:56.062 job14: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:56.062 job15: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:56.062 fio-3.35 00:19:56.062 Starting 16 threads 00:19:56.062 [2024-07-23 22:21:28.120017] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:56.062 [2024-07-23 22:21:28.123902] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:56.062 [2024-07-23 22:21:28.127529] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:56.062 [2024-07-23 
22:21:28.131689] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:56.062 [2024-07-23 22:21:28.134078] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:56.062 [2024-07-23 22:21:28.135943] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:56.062 [2024-07-23 22:21:28.137519] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:56.062 [2024-07-23 22:21:28.139754] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:56.062 [2024-07-23 22:21:28.141873] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:56.062 [2024-07-23 22:21:28.143646] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:56.062 [2024-07-23 22:21:28.145248] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:56.062 [2024-07-23 22:21:28.147621] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:56.062 [2024-07-23 22:21:28.149277] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:56.062 [2024-07-23 22:21:28.150906] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:56.062 [2024-07-23 22:21:28.152643] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:56.062 [2024-07-23 22:21:28.154918] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:57.441 [2024-07-23 22:21:29.475926] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:57.441 [2024-07-23 22:21:29.478126] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:57.441 [2024-07-23 22:21:29.480315] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:57.441 [2024-07-23 22:21:29.482980] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:57.441 [2024-07-23 22:21:29.485438] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:57.441 [2024-07-23 22:21:29.487585] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:57.441 [2024-07-23 22:21:29.489613] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:57.441 [2024-07-23 22:21:29.492095] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:57.441 [2024-07-23 22:21:29.494675] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:57.441 [2024-07-23 22:21:29.496757] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:57.441 [2024-07-23 22:21:29.498761] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:57.441 [2024-07-23 22:21:29.500709] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:57.441 [2024-07-23 22:21:29.502812] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:57.441 [2024-07-23 22:21:29.505203] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:57.441 [2024-07-23 22:21:29.507296] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:57.441 [2024-07-23 22:21:29.509243] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:57.441 00:19:57.441 job0: (groupid=0, jobs=1): err= 0: pid=92844: Tue Jul 23 22:21:29 2024 00:19:57.441 read: IOPS=572, BW=71.5MiB/s (75.0MB/s)(73.4MiB/1026msec) 00:19:57.441 slat (usec): min=7, max=1477, avg=28.48, stdev=88.05 00:19:57.441 clat (usec): min=1671, max=30283, avg=6938.36, stdev=2122.13 00:19:57.441 lat (usec): min=1702, max=30293, avg=6966.83, stdev=2120.24 00:19:57.441 clat percentiles (usec): 00:19:57.441 | 1.00th=[ 3032], 5.00th=[ 5800], 
10.00th=[ 6063], 20.00th=[ 6259], 00:19:57.441 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6652], 60.00th=[ 6849], 00:19:57.441 | 70.00th=[ 7046], 80.00th=[ 7242], 90.00th=[ 7570], 95.00th=[ 8094], 00:19:57.441 | 99.00th=[12518], 99.50th=[28705], 99.90th=[30278], 99.95th=[30278], 00:19:57.441 | 99.99th=[30278] 00:19:57.442 bw ( KiB/s): min=74240, max=75008, per=6.36%, avg=74624.00, stdev=543.06, samples=2 00:19:57.442 iops : min= 580, max= 586, avg=583.00, stdev= 4.24, samples=2 00:19:57.442 write: IOPS=612, BW=76.5MiB/s (80.2MB/s)(78.5MiB/1026msec); 0 zone resets 00:19:57.442 slat (usec): min=9, max=815, avg=33.65, stdev=66.23 00:19:57.442 clat (usec): min=6701, max=70718, avg=45659.25, stdev=5604.84 00:19:57.442 lat (usec): min=6719, max=70751, avg=45692.90, stdev=5608.63 00:19:57.442 clat percentiles (usec): 00:19:57.442 | 1.00th=[21890], 5.00th=[38536], 10.00th=[41157], 20.00th=[43254], 00:19:57.442 | 30.00th=[44827], 40.00th=[45351], 50.00th=[46400], 60.00th=[46924], 00:19:57.442 | 70.00th=[47973], 80.00th=[49021], 90.00th=[50070], 95.00th=[51643], 00:19:57.442 | 99.00th=[56886], 99.50th=[64750], 99.90th=[70779], 99.95th=[70779], 00:19:57.442 | 99.99th=[70779] 00:19:57.442 bw ( KiB/s): min=74752, max=79104, per=6.40%, avg=76928.00, stdev=3077.33, samples=2 00:19:57.442 iops : min= 584, max= 618, avg=601.00, stdev=24.04, samples=2 00:19:57.442 lat (msec) : 2=0.08%, 4=0.41%, 10=46.50%, 20=1.40%, 50=45.51% 00:19:57.442 lat (msec) : 100=6.09% 00:19:57.442 cpu : usr=0.88%, sys=1.95%, ctx=1162, majf=0, minf=1 00:19:57.442 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=97.4%, >=64=0.0% 00:19:57.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.442 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:57.442 issued rwts: total=587,628,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.442 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:57.442 job1: (groupid=0, jobs=1): err= 0: pid=92845: 
Tue Jul 23 22:21:29 2024 00:19:57.442 read: IOPS=540, BW=67.5MiB/s (70.8MB/s)(70.0MiB/1037msec) 00:19:57.442 slat (usec): min=6, max=1426, avg=19.63, stdev=62.85 00:19:57.442 clat (usec): min=3743, max=43533, avg=7385.35, stdev=3605.66 00:19:57.442 lat (usec): min=4828, max=43551, avg=7404.98, stdev=3602.82 00:19:57.442 clat percentiles (usec): 00:19:57.442 | 1.00th=[ 5538], 5.00th=[ 6063], 10.00th=[ 6259], 20.00th=[ 6456], 00:19:57.442 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6980], 00:19:57.442 | 70.00th=[ 7111], 80.00th=[ 7308], 90.00th=[ 7832], 95.00th=[ 9634], 00:19:57.442 | 99.00th=[38011], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:19:57.442 | 99.99th=[43779] 00:19:57.442 bw ( KiB/s): min=70770, max=71054, per=6.04%, avg=70912.00, stdev=200.82, samples=2 00:19:57.442 iops : min= 552, max= 555, avg=553.50, stdev= 2.12, samples=2 00:19:57.442 write: IOPS=588, BW=73.5MiB/s (77.1MB/s)(76.2MiB/1037msec); 0 zone resets 00:19:57.442 slat (usec): min=8, max=590, avg=26.55, stdev=44.24 00:19:57.442 clat (usec): min=15537, max=82504, avg=47470.61, stdev=5796.53 00:19:57.442 lat (usec): min=15549, max=82517, avg=47497.16, stdev=5803.27 00:19:57.442 clat percentiles (usec): 00:19:57.442 | 1.00th=[28181], 5.00th=[42206], 10.00th=[43254], 20.00th=[44303], 00:19:57.442 | 30.00th=[45351], 40.00th=[46400], 50.00th=[47449], 60.00th=[47973], 00:19:57.442 | 70.00th=[49021], 80.00th=[49546], 90.00th=[51643], 95.00th=[53216], 00:19:57.442 | 99.00th=[71828], 99.50th=[78119], 99.90th=[82314], 99.95th=[82314], 00:19:57.442 | 99.99th=[82314] 00:19:57.442 bw ( KiB/s): min=71281, max=78236, per=6.22%, avg=74758.50, stdev=4917.93, samples=2 00:19:57.442 iops : min= 556, max= 611, avg=583.50, stdev=38.89, samples=2 00:19:57.442 lat (msec) : 4=0.09%, 10=45.90%, 20=1.62%, 50=42.82%, 100=9.57% 00:19:57.442 cpu : usr=1.06%, sys=1.45%, ctx=1123, majf=0, minf=1 00:19:57.442 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=97.4%, >=64=0.0% 00:19:57.442 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.442 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:57.442 issued rwts: total=560,610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.442 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:57.442 job2: (groupid=0, jobs=1): err= 0: pid=92851: Tue Jul 23 22:21:29 2024 00:19:57.442 read: IOPS=588, BW=73.5MiB/s (77.1MB/s)(76.1MiB/1035msec) 00:19:57.442 slat (usec): min=7, max=343, avg=15.02, stdev=21.07 00:19:57.442 clat (usec): min=3464, max=38724, avg=7197.47, stdev=2262.84 00:19:57.442 lat (usec): min=3474, max=38734, avg=7212.49, stdev=2262.65 00:19:57.442 clat percentiles (usec): 00:19:57.442 | 1.00th=[ 5211], 5.00th=[ 6063], 10.00th=[ 6325], 20.00th=[ 6521], 00:19:57.442 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7177], 00:19:57.442 | 70.00th=[ 7308], 80.00th=[ 7504], 90.00th=[ 7767], 95.00th=[ 8029], 00:19:57.442 | 99.00th=[10552], 99.50th=[12256], 99.90th=[38536], 99.95th=[38536], 00:19:57.442 | 99.99th=[38536] 00:19:57.442 bw ( KiB/s): min=77157, max=77824, per=6.60%, avg=77490.50, stdev=471.64, samples=2 00:19:57.442 iops : min= 602, max= 608, avg=605.00, stdev= 4.24, samples=2 00:19:57.442 write: IOPS=586, BW=73.3MiB/s (76.9MB/s)(75.9MiB/1035msec); 0 zone resets 00:19:57.442 slat (usec): min=8, max=766, avg=22.99, stdev=42.09 00:19:57.442 clat (usec): min=10358, max=75391, avg=47182.27, stdev=5946.34 00:19:57.442 lat (usec): min=10388, max=75414, avg=47205.26, stdev=5947.60 00:19:57.442 clat percentiles (usec): 00:19:57.442 | 1.00th=[22676], 5.00th=[40633], 10.00th=[42206], 20.00th=[44303], 00:19:57.442 | 30.00th=[45876], 40.00th=[46924], 50.00th=[47449], 60.00th=[48497], 00:19:57.442 | 70.00th=[49546], 80.00th=[50070], 90.00th=[51643], 95.00th=[53216], 00:19:57.442 | 99.00th=[67634], 99.50th=[70779], 99.90th=[74974], 99.95th=[74974], 00:19:57.442 | 99.99th=[74974] 00:19:57.442 bw ( KiB/s): min=71680, max=76135, 
per=6.15%, avg=73907.50, stdev=3150.16, samples=2 00:19:57.442 iops : min= 560, max= 594, avg=577.00, stdev=24.04, samples=2 00:19:57.442 lat (msec) : 4=0.49%, 10=48.77%, 20=0.99%, 50=38.49%, 100=11.27% 00:19:57.442 cpu : usr=0.48%, sys=1.64%, ctx=1194, majf=0, minf=1 00:19:57.442 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=97.5%, >=64=0.0% 00:19:57.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.442 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:57.442 issued rwts: total=609,607,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.442 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:57.442 job3: (groupid=0, jobs=1): err= 0: pid=92863: Tue Jul 23 22:21:29 2024 00:19:57.442 read: IOPS=591, BW=73.9MiB/s (77.5MB/s)(78.2MiB/1059msec) 00:19:57.442 slat (usec): min=7, max=567, avg=16.65, stdev=36.42 00:19:57.442 clat (usec): min=725, max=63709, avg=7090.27, stdev=4700.55 00:19:57.442 lat (usec): min=738, max=63728, avg=7106.92, stdev=4700.67 00:19:57.442 clat percentiles (usec): 00:19:57.442 | 1.00th=[ 2114], 5.00th=[ 5014], 10.00th=[ 5800], 20.00th=[ 6194], 00:19:57.442 | 30.00th=[ 6390], 40.00th=[ 6521], 50.00th=[ 6652], 60.00th=[ 6783], 00:19:57.442 | 70.00th=[ 6915], 80.00th=[ 7111], 90.00th=[ 7898], 95.00th=[ 9372], 00:19:57.442 | 99.00th=[15401], 99.50th=[60556], 99.90th=[63701], 99.95th=[63701], 00:19:57.442 | 99.99th=[63701] 00:19:57.442 bw ( KiB/s): min=74347, max=84736, per=6.78%, avg=79541.50, stdev=7346.13, samples=2 00:19:57.442 iops : min= 580, max= 662, avg=621.00, stdev=57.98, samples=2 00:19:57.442 write: IOPS=619, BW=77.4MiB/s (81.2MB/s)(82.0MiB/1059msec); 0 zone resets 00:19:57.442 slat (usec): min=9, max=469, avg=22.28, stdev=30.03 00:19:57.442 clat (usec): min=1356, max=101738, avg=44717.41, stdev=12593.02 00:19:57.442 lat (usec): min=1407, max=101756, avg=44739.69, stdev=12594.29 00:19:57.442 clat percentiles (msec): 00:19:57.442 | 1.00th=[ 6], 5.00th=[ 14], 
10.00th=[ 41], 20.00th=[ 44], 00:19:57.442 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 47], 00:19:57.442 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 52], 95.00th=[ 56], 00:19:57.442 | 99.00th=[ 91], 99.50th=[ 95], 99.90th=[ 103], 99.95th=[ 103], 00:19:57.442 | 99.99th=[ 103] 00:19:57.442 bw ( KiB/s): min=78435, max=82432, per=6.70%, avg=80433.50, stdev=2826.31, samples=2 00:19:57.442 iops : min= 612, max= 644, avg=628.00, stdev=22.63, samples=2 00:19:57.442 lat (usec) : 750=0.08%, 1000=0.16% 00:19:57.442 lat (msec) : 2=0.31%, 4=1.79%, 10=46.18%, 20=3.98%, 50=39.63% 00:19:57.442 lat (msec) : 100=7.72%, 250=0.16% 00:19:57.442 cpu : usr=0.85%, sys=1.61%, ctx=1239, majf=0, minf=1 00:19:57.442 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=97.6%, >=64=0.0% 00:19:57.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.442 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:57.442 issued rwts: total=626,656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.442 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:57.442 job4: (groupid=0, jobs=1): err= 0: pid=92870: Tue Jul 23 22:21:29 2024 00:19:57.442 read: IOPS=606, BW=75.8MiB/s (79.4MB/s)(78.9MiB/1041msec) 00:19:57.442 slat (usec): min=6, max=815, avg=19.96, stdev=50.63 00:19:57.442 clat (usec): min=3455, max=45263, avg=7070.69, stdev=2782.47 00:19:57.442 lat (usec): min=3463, max=45272, avg=7090.66, stdev=2780.93 00:19:57.442 clat percentiles (usec): 00:19:57.442 | 1.00th=[ 4113], 5.00th=[ 5735], 10.00th=[ 5997], 20.00th=[ 6325], 00:19:57.442 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6783], 00:19:57.442 | 70.00th=[ 6980], 80.00th=[ 7308], 90.00th=[ 8029], 95.00th=[ 9503], 00:19:57.442 | 99.00th=[11600], 99.50th=[13173], 99.90th=[45351], 99.95th=[45351], 00:19:57.443 | 99.99th=[45351] 00:19:57.443 bw ( KiB/s): min=76902, max=83879, per=6.85%, avg=80390.50, stdev=4933.48, samples=2 00:19:57.443 iops : min= 600, max= 
655, avg=627.50, stdev=38.89, samples=2 00:19:57.443 write: IOPS=612, BW=76.6MiB/s (80.3MB/s)(79.8MiB/1041msec); 0 zone resets 00:19:57.443 slat (usec): min=9, max=491, avg=24.11, stdev=35.35 00:19:57.443 clat (usec): min=8363, max=89679, avg=45045.18, stdev=8772.48 00:19:57.443 lat (usec): min=8380, max=89693, avg=45069.29, stdev=8772.78 00:19:57.443 clat percentiles (usec): 00:19:57.443 | 1.00th=[ 8455], 5.00th=[35914], 10.00th=[40633], 20.00th=[43254], 00:19:57.443 | 30.00th=[43779], 40.00th=[44827], 50.00th=[45876], 60.00th=[46924], 00:19:57.443 | 70.00th=[47449], 80.00th=[48497], 90.00th=[50070], 95.00th=[51119], 00:19:57.443 | 99.00th=[73925], 99.50th=[84411], 99.90th=[89654], 99.95th=[89654], 00:19:57.443 | 99.99th=[89654] 00:19:57.443 bw ( KiB/s): min=77210, max=78946, per=6.50%, avg=78078.00, stdev=1227.54, samples=2 00:19:57.443 iops : min= 603, max= 616, avg=609.50, stdev= 9.19, samples=2 00:19:57.443 lat (msec) : 4=0.32%, 10=48.15%, 20=2.84%, 50=43.97%, 100=4.73% 00:19:57.443 cpu : usr=0.77%, sys=1.63%, ctx=1165, majf=0, minf=1 00:19:57.443 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=97.6%, >=64=0.0% 00:19:57.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.443 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:57.443 issued rwts: total=631,638,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.443 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:57.443 job5: (groupid=0, jobs=1): err= 0: pid=92871: Tue Jul 23 22:21:29 2024 00:19:57.443 read: IOPS=594, BW=74.3MiB/s (77.9MB/s)(77.0MiB/1036msec) 00:19:57.443 slat (usec): min=7, max=1695, avg=23.25, stdev=97.82 00:19:57.443 clat (usec): min=2884, max=42411, avg=7165.10, stdev=2700.63 00:19:57.443 lat (usec): min=2894, max=42424, avg=7188.36, stdev=2699.36 00:19:57.443 clat percentiles (usec): 00:19:57.443 | 1.00th=[ 5145], 5.00th=[ 5932], 10.00th=[ 6194], 20.00th=[ 6456], 00:19:57.443 | 30.00th=[ 6587], 40.00th=[ 6718], 
50.00th=[ 6783], 60.00th=[ 6980], 00:19:57.443 | 70.00th=[ 7111], 80.00th=[ 7308], 90.00th=[ 7767], 95.00th=[ 8455], 00:19:57.443 | 99.00th=[13042], 99.50th=[36963], 99.90th=[42206], 99.95th=[42206], 00:19:57.443 | 99.99th=[42206] 00:19:57.443 bw ( KiB/s): min=73728, max=83110, per=6.68%, avg=78419.00, stdev=6634.08, samples=2 00:19:57.443 iops : min= 576, max= 649, avg=612.50, stdev=51.62, samples=2 00:19:57.443 write: IOPS=592, BW=74.1MiB/s (77.7MB/s)(76.8MiB/1036msec); 0 zone resets 00:19:57.443 slat (usec): min=9, max=5660, avg=37.26, stdev=251.86 00:19:57.443 clat (usec): min=12054, max=82675, avg=46648.58, stdev=5841.83 00:19:57.443 lat (usec): min=12084, max=82698, avg=46685.84, stdev=5830.11 00:19:57.443 clat percentiles (usec): 00:19:57.443 | 1.00th=[25035], 5.00th=[40109], 10.00th=[42206], 20.00th=[43779], 00:19:57.443 | 30.00th=[44827], 40.00th=[45876], 50.00th=[46400], 60.00th=[47449], 00:19:57.443 | 70.00th=[48497], 80.00th=[49546], 90.00th=[51119], 95.00th=[52691], 00:19:57.443 | 99.00th=[65799], 99.50th=[79168], 99.90th=[82314], 99.95th=[82314], 00:19:57.443 | 99.99th=[82314] 00:19:57.443 bw ( KiB/s): min=71823, max=78336, per=6.25%, avg=75079.50, stdev=4605.39, samples=2 00:19:57.443 iops : min= 561, max= 612, avg=586.50, stdev=36.06, samples=2 00:19:57.443 lat (msec) : 4=0.16%, 10=48.54%, 20=1.38%, 50=42.52%, 100=7.40% 00:19:57.443 cpu : usr=0.97%, sys=1.64%, ctx=1157, majf=0, minf=1 00:19:57.443 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=97.5%, >=64=0.0% 00:19:57.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.443 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:57.443 issued rwts: total=616,614,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.443 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:57.443 job6: (groupid=0, jobs=1): err= 0: pid=92873: Tue Jul 23 22:21:29 2024 00:19:57.443 read: IOPS=587, BW=73.4MiB/s (77.0MB/s)(76.8MiB/1045msec) 00:19:57.443 slat 
(usec): min=6, max=304, avg=14.85, stdev=21.04 00:19:57.443 clat (usec): min=629, max=50636, avg=7376.91, stdev=3843.19 00:19:57.443 lat (usec): min=637, max=50643, avg=7391.76, stdev=3842.14 00:19:57.443 clat percentiles (usec): 00:19:57.443 | 1.00th=[ 2180], 5.00th=[ 5997], 10.00th=[ 6259], 20.00th=[ 6456], 00:19:57.443 | 30.00th=[ 6652], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7177], 00:19:57.443 | 70.00th=[ 7373], 80.00th=[ 7570], 90.00th=[ 7898], 95.00th=[ 9634], 00:19:57.443 | 99.00th=[11731], 99.50th=[45876], 99.90th=[50594], 99.95th=[50594], 00:19:57.443 | 99.99th=[50594] 00:19:57.443 bw ( KiB/s): min=73580, max=82176, per=6.64%, avg=77878.00, stdev=6078.29, samples=2 00:19:57.443 iops : min= 574, max= 642, avg=608.00, stdev=48.08, samples=2 00:19:57.443 write: IOPS=591, BW=73.9MiB/s (77.5MB/s)(77.2MiB/1045msec); 0 zone resets 00:19:57.443 slat (usec): min=9, max=788, avg=23.34, stdev=46.61 00:19:57.443 clat (usec): min=1299, max=87927, avg=46542.81, stdev=10236.69 00:19:57.443 lat (usec): min=1602, max=87943, avg=46566.14, stdev=10231.68 00:19:57.443 clat percentiles (usec): 00:19:57.443 | 1.00th=[11076], 5.00th=[17957], 10.00th=[40633], 20.00th=[44303], 00:19:57.443 | 30.00th=[45876], 40.00th=[46924], 50.00th=[47973], 60.00th=[49021], 00:19:57.443 | 70.00th=[50070], 80.00th=[51119], 90.00th=[53216], 95.00th=[55313], 00:19:57.443 | 99.00th=[76022], 99.50th=[82314], 99.90th=[87557], 99.95th=[87557], 00:19:57.443 | 99.99th=[87557] 00:19:57.443 bw ( KiB/s): min=75520, max=75880, per=6.30%, avg=75700.00, stdev=254.56, samples=2 00:19:57.443 iops : min= 590, max= 592, avg=591.00, stdev= 1.41, samples=2 00:19:57.443 lat (usec) : 750=0.08% 00:19:57.443 lat (msec) : 2=0.49%, 4=1.38%, 10=46.10%, 20=3.90%, 50=34.25% 00:19:57.443 lat (msec) : 100=13.80% 00:19:57.443 cpu : usr=0.48%, sys=1.63%, ctx=1176, majf=0, minf=1 00:19:57.443 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=97.5%, >=64=0.0% 00:19:57.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.443 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:57.443 issued rwts: total=614,618,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.443 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:57.443 job7: (groupid=0, jobs=1): err= 0: pid=92936: Tue Jul 23 22:21:29 2024 00:19:57.443 read: IOPS=532, BW=66.5MiB/s (69.8MB/s)(68.9MiB/1035msec) 00:19:57.443 slat (usec): min=6, max=2143, avg=22.55, stdev=96.06 00:19:57.443 clat (usec): min=954, max=38795, avg=6932.90, stdev=2576.49 00:19:57.443 lat (usec): min=963, max=38813, avg=6955.45, stdev=2573.26 00:19:57.443 clat percentiles (usec): 00:19:57.443 | 1.00th=[ 1647], 5.00th=[ 5735], 10.00th=[ 5997], 20.00th=[ 6259], 00:19:57.443 | 30.00th=[ 6390], 40.00th=[ 6521], 50.00th=[ 6652], 60.00th=[ 6783], 00:19:57.443 | 70.00th=[ 6915], 80.00th=[ 7177], 90.00th=[ 7570], 95.00th=[ 8848], 00:19:57.443 | 99.00th=[13304], 99.50th=[35914], 99.90th=[38536], 99.95th=[38536], 00:19:57.443 | 99.99th=[38536] 00:19:57.443 bw ( KiB/s): min=65280, max=75008, per=5.98%, avg=70144.00, stdev=6878.73, samples=2 00:19:57.443 iops : min= 510, max= 586, avg=548.00, stdev=53.74, samples=2 00:19:57.443 write: IOPS=606, BW=75.8MiB/s (79.5MB/s)(78.5MiB/1035msec); 0 zone resets 00:19:57.443 slat (usec): min=7, max=342, avg=23.04, stdev=21.77 00:19:57.443 clat (usec): min=8638, max=79228, avg=46539.07, stdev=5701.02 00:19:57.443 lat (usec): min=8670, max=79261, avg=46562.12, stdev=5700.97 00:19:57.443 clat percentiles (usec): 00:19:57.443 | 1.00th=[22414], 5.00th=[41157], 10.00th=[42206], 20.00th=[43779], 00:19:57.443 | 30.00th=[44827], 40.00th=[45876], 50.00th=[46400], 60.00th=[47449], 00:19:57.443 | 70.00th=[48497], 80.00th=[49546], 90.00th=[51119], 95.00th=[52691], 00:19:57.443 | 99.00th=[64226], 99.50th=[70779], 99.90th=[79168], 99.95th=[79168], 00:19:57.443 | 99.99th=[79168] 00:19:57.443 bw ( KiB/s): min=74496, max=79104, per=6.39%, avg=76800.00, stdev=3258.35, samples=2 
00:19:57.443 iops : min= 582, max= 618, avg=600.00, stdev=25.46, samples=2 00:19:57.443 lat (usec) : 1000=0.25% 00:19:57.443 lat (msec) : 2=0.25%, 4=0.34%, 10=44.19%, 20=1.87%, 50=44.53% 00:19:57.443 lat (msec) : 100=8.57% 00:19:57.443 cpu : usr=0.77%, sys=1.84%, ctx=1059, majf=0, minf=1 00:19:57.443 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=97.4%, >=64=0.0% 00:19:57.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.443 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:57.443 issued rwts: total=551,628,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.443 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:57.443 job8: (groupid=0, jobs=1): err= 0: pid=92955: Tue Jul 23 22:21:29 2024 00:19:57.443 read: IOPS=565, BW=70.7MiB/s (74.2MB/s)(73.6MiB/1041msec) 00:19:57.443 slat (usec): min=7, max=2627, avg=25.76, stdev=118.34 00:19:57.443 clat (usec): min=1599, max=47209, avg=7380.50, stdev=3647.92 00:19:57.444 lat (usec): min=1613, max=47216, avg=7406.27, stdev=3641.86 00:19:57.444 clat percentiles (usec): 00:19:57.444 | 1.00th=[ 4752], 5.00th=[ 5932], 10.00th=[ 6063], 20.00th=[ 6259], 00:19:57.444 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6849], 00:19:57.444 | 70.00th=[ 6980], 80.00th=[ 7308], 90.00th=[ 8225], 95.00th=[11207], 00:19:57.444 | 99.00th=[17171], 99.50th=[45351], 99.90th=[47449], 99.95th=[47449], 00:19:57.444 | 99.99th=[47449] 00:19:57.444 bw ( KiB/s): min=72336, max=77413, per=6.38%, avg=74874.50, stdev=3589.98, samples=2 00:19:57.444 iops : min= 565, max= 604, avg=584.50, stdev=27.58, samples=2 00:19:57.444 write: IOPS=604, BW=75.5MiB/s (79.2MB/s)(78.6MiB/1041msec); 0 zone resets 00:19:57.444 slat (usec): min=9, max=1867, avg=38.57, stdev=100.70 00:19:57.444 clat (usec): min=3218, max=87862, avg=45751.46, stdev=7612.60 00:19:57.444 lat (usec): min=3234, max=87883, avg=45790.03, stdev=7618.27 00:19:57.444 clat percentiles (usec): 00:19:57.444 | 
1.00th=[17433], 5.00th=[39060], 10.00th=[41157], 20.00th=[43254], 00:19:57.444 | 30.00th=[44303], 40.00th=[45351], 50.00th=[46400], 60.00th=[46924], 00:19:57.444 | 70.00th=[47449], 80.00th=[48497], 90.00th=[50070], 95.00th=[51643], 00:19:57.444 | 99.00th=[76022], 99.50th=[84411], 99.90th=[87557], 99.95th=[87557], 00:19:57.444 | 99.99th=[87557] 00:19:57.444 bw ( KiB/s): min=75158, max=78946, per=6.42%, avg=77052.00, stdev=2678.52, samples=2 00:19:57.444 iops : min= 587, max= 616, avg=601.50, stdev=20.51, samples=2 00:19:57.444 lat (msec) : 2=0.25%, 4=0.33%, 10=44.42%, 20=4.19%, 50=44.91% 00:19:57.444 lat (msec) : 100=5.91% 00:19:57.444 cpu : usr=0.29%, sys=2.12%, ctx=1132, majf=0, minf=1 00:19:57.444 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=97.5%, >=64=0.0% 00:19:57.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.444 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:57.444 issued rwts: total=589,629,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.444 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:57.444 job9: (groupid=0, jobs=1): err= 0: pid=92956: Tue Jul 23 22:21:29 2024 00:19:57.444 read: IOPS=618, BW=77.3MiB/s (81.0MB/s)(79.5MiB/1029msec) 00:19:57.444 slat (usec): min=7, max=407, avg=19.38, stdev=27.50 00:19:57.444 clat (usec): min=1354, max=34700, avg=7345.17, stdev=2894.52 00:19:57.444 lat (usec): min=1363, max=34712, avg=7364.55, stdev=2892.88 00:19:57.444 clat percentiles (usec): 00:19:57.444 | 1.00th=[ 4883], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 6456], 00:19:57.444 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 7046], 00:19:57.444 | 70.00th=[ 7177], 80.00th=[ 7504], 90.00th=[ 8225], 95.00th=[ 9896], 00:19:57.444 | 99.00th=[28967], 99.50th=[32637], 99.90th=[34866], 99.95th=[34866], 00:19:57.444 | 99.99th=[34866] 00:19:57.444 bw ( KiB/s): min=73728, max=87470, per=6.87%, avg=80599.00, stdev=9717.06, samples=2 00:19:57.444 iops : min= 576, max= 
683, avg=629.50, stdev=75.66, samples=2 00:19:57.444 write: IOPS=594, BW=74.3MiB/s (78.0MB/s)(76.5MiB/1029msec); 0 zone resets 00:19:57.444 slat (usec): min=11, max=535, avg=27.73, stdev=35.74 00:19:57.444 clat (usec): min=11673, max=71449, avg=46053.33, stdev=5629.41 00:19:57.444 lat (usec): min=11699, max=71471, avg=46081.06, stdev=5629.90 00:19:57.444 clat percentiles (usec): 00:19:57.444 | 1.00th=[23987], 5.00th=[38536], 10.00th=[41157], 20.00th=[43254], 00:19:57.444 | 30.00th=[44303], 40.00th=[45351], 50.00th=[46400], 60.00th=[47449], 00:19:57.444 | 70.00th=[48497], 80.00th=[49546], 90.00th=[50594], 95.00th=[51643], 00:19:57.444 | 99.00th=[65274], 99.50th=[68682], 99.90th=[71828], 99.95th=[71828], 00:19:57.444 | 99.99th=[71828] 00:19:57.444 bw ( KiB/s): min=72080, max=78336, per=6.26%, avg=75208.00, stdev=4423.66, samples=2 00:19:57.444 iops : min= 563, max= 612, avg=587.50, stdev=34.65, samples=2 00:19:57.444 lat (msec) : 2=0.24%, 4=0.24%, 10=48.00%, 20=2.24%, 50=41.27% 00:19:57.444 lat (msec) : 100=8.01% 00:19:57.444 cpu : usr=1.07%, sys=2.04%, ctx=1139, majf=0, minf=1 00:19:57.444 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=97.5%, >=64=0.0% 00:19:57.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.444 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:57.444 issued rwts: total=636,612,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.444 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:57.444 job10: (groupid=0, jobs=1): err= 0: pid=92957: Tue Jul 23 22:21:29 2024 00:19:57.444 read: IOPS=602, BW=75.3MiB/s (78.9MB/s)(77.8MiB/1033msec) 00:19:57.444 slat (usec): min=7, max=1946, avg=22.96, stdev=85.67 00:19:57.444 clat (usec): min=2764, max=36799, avg=7199.50, stdev=2449.72 00:19:57.444 lat (usec): min=2774, max=36821, avg=7222.46, stdev=2450.12 00:19:57.444 clat percentiles (usec): 00:19:57.444 | 1.00th=[ 5342], 5.00th=[ 6063], 10.00th=[ 6259], 20.00th=[ 6456], 00:19:57.444 | 
30.00th=[ 6652], 40.00th=[ 6783], 50.00th=[ 6980], 60.00th=[ 7177], 00:19:57.444 | 70.00th=[ 7308], 80.00th=[ 7439], 90.00th=[ 7767], 95.00th=[ 8225], 00:19:57.444 | 99.00th=[11076], 99.50th=[33817], 99.90th=[36963], 99.95th=[36963], 00:19:57.444 | 99.99th=[36963] 00:19:57.444 bw ( KiB/s): min=76032, max=82011, per=6.73%, avg=79021.50, stdev=4227.79, samples=2 00:19:57.444 iops : min= 594, max= 640, avg=617.00, stdev=32.53, samples=2 00:19:57.444 write: IOPS=587, BW=73.5MiB/s (77.0MB/s)(75.9MiB/1033msec); 0 zone resets 00:19:57.444 slat (usec): min=9, max=1697, avg=32.91, stdev=84.05 00:19:57.444 clat (usec): min=7440, max=79414, avg=46935.32, stdev=5928.42 00:19:57.444 lat (usec): min=7460, max=79432, avg=46968.23, stdev=5926.59 00:19:57.444 clat percentiles (usec): 00:19:57.444 | 1.00th=[20841], 5.00th=[40633], 10.00th=[43254], 20.00th=[44827], 00:19:57.444 | 30.00th=[45876], 40.00th=[46400], 50.00th=[46924], 60.00th=[47973], 00:19:57.444 | 70.00th=[49021], 80.00th=[49546], 90.00th=[51119], 95.00th=[52691], 00:19:57.444 | 99.00th=[63701], 99.50th=[71828], 99.90th=[79168], 99.95th=[79168], 00:19:57.444 | 99.99th=[79168] 00:19:57.444 bw ( KiB/s): min=72303, max=76032, per=6.18%, avg=74167.50, stdev=2636.80, samples=2 00:19:57.444 iops : min= 564, max= 594, avg=579.00, stdev=21.21, samples=2 00:19:57.444 lat (msec) : 4=0.24%, 10=49.31%, 20=1.22%, 50=40.20%, 100=9.03% 00:19:57.444 cpu : usr=0.39%, sys=2.23%, ctx=1179, majf=0, minf=1 00:19:57.444 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=97.5%, >=64=0.0% 00:19:57.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.444 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:57.444 issued rwts: total=622,607,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.444 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:57.444 job11: (groupid=0, jobs=1): err= 0: pid=92958: Tue Jul 23 22:21:29 2024 00:19:57.444 read: IOPS=619, BW=77.5MiB/s 
(81.2MB/s)(79.5MiB/1026msec) 00:19:57.444 slat (usec): min=6, max=760, avg=17.45, stdev=37.82 00:19:57.444 clat (usec): min=2641, max=31427, avg=7060.64, stdev=2208.88 00:19:57.444 lat (usec): min=2650, max=31445, avg=7078.09, stdev=2217.94 00:19:57.444 clat percentiles (usec): 00:19:57.444 | 1.00th=[ 5211], 5.00th=[ 5932], 10.00th=[ 6128], 20.00th=[ 6325], 00:19:57.444 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6783], 00:19:57.444 | 70.00th=[ 6980], 80.00th=[ 7177], 90.00th=[ 7832], 95.00th=[ 8979], 00:19:57.444 | 99.00th=[13960], 99.50th=[26870], 99.90th=[31327], 99.95th=[31327], 00:19:57.444 | 99.99th=[31327] 00:19:57.444 bw ( KiB/s): min=77723, max=83968, per=6.89%, avg=80845.50, stdev=4415.88, samples=2 00:19:57.444 iops : min= 607, max= 656, avg=631.50, stdev=34.65, samples=2 00:19:57.444 write: IOPS=608, BW=76.0MiB/s (79.7MB/s)(78.0MiB/1026msec); 0 zone resets 00:19:57.444 slat (usec): min=8, max=2383, avg=25.11, stdev=98.53 00:19:57.444 clat (usec): min=9046, max=65420, avg=45299.87, stdev=5151.97 00:19:57.444 lat (usec): min=9074, max=65440, avg=45324.98, stdev=5135.88 00:19:57.444 clat percentiles (usec): 00:19:57.444 | 1.00th=[23462], 5.00th=[38536], 10.00th=[40109], 20.00th=[42730], 00:19:57.444 | 30.00th=[44303], 40.00th=[45351], 50.00th=[45876], 60.00th=[46924], 00:19:57.444 | 70.00th=[47449], 80.00th=[48497], 90.00th=[49546], 95.00th=[50594], 00:19:57.444 | 99.00th=[54789], 99.50th=[60031], 99.90th=[65274], 99.95th=[65274], 00:19:57.444 | 99.99th=[65274] 00:19:57.444 bw ( KiB/s): min=74132, max=78848, per=6.37%, avg=76490.00, stdev=3334.72, samples=2 00:19:57.444 iops : min= 579, max= 616, avg=597.50, stdev=26.16, samples=2 00:19:57.444 lat (msec) : 4=0.24%, 10=48.57%, 20=1.67%, 50=46.19%, 100=3.33% 00:19:57.444 cpu : usr=0.88%, sys=1.56%, ctx=1203, majf=0, minf=1 00:19:57.444 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=97.5%, >=64=0.0% 00:19:57.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:19:57.444 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:57.444 issued rwts: total=636,624,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.444 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:57.445 job12: (groupid=0, jobs=1): err= 0: pid=92959: Tue Jul 23 22:21:29 2024 00:19:57.445 read: IOPS=629, BW=78.6MiB/s (82.5MB/s)(81.0MiB/1030msec) 00:19:57.445 slat (usec): min=6, max=283, avg=17.99, stdev=29.92 00:19:57.445 clat (usec): min=2616, max=34165, avg=6970.98, stdev=2070.04 00:19:57.445 lat (usec): min=2626, max=34177, avg=6988.97, stdev=2069.64 00:19:57.445 clat percentiles (usec): 00:19:57.445 | 1.00th=[ 5145], 5.00th=[ 5866], 10.00th=[ 6128], 20.00th=[ 6325], 00:19:57.445 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6652], 60.00th=[ 6783], 00:19:57.445 | 70.00th=[ 6980], 80.00th=[ 7177], 90.00th=[ 7570], 95.00th=[ 8160], 00:19:57.445 | 99.00th=[13042], 99.50th=[15270], 99.90th=[34341], 99.95th=[34341], 00:19:57.445 | 99.99th=[34341] 00:19:57.445 bw ( KiB/s): min=72336, max=92928, per=7.04%, avg=82632.00, stdev=14560.74, samples=2 00:19:57.445 iops : min= 565, max= 726, avg=645.50, stdev=113.84, samples=2 00:19:57.445 write: IOPS=602, BW=75.4MiB/s (79.0MB/s)(77.6MiB/1030msec); 0 zone resets 00:19:57.445 slat (usec): min=7, max=329, avg=22.49, stdev=26.59 00:19:57.445 clat (usec): min=9080, max=68393, avg=45663.17, stdev=5207.75 00:19:57.445 lat (usec): min=9122, max=68409, avg=45685.66, stdev=5206.60 00:19:57.445 clat percentiles (usec): 00:19:57.445 | 1.00th=[24249], 5.00th=[40633], 10.00th=[41681], 20.00th=[43254], 00:19:57.445 | 30.00th=[44303], 40.00th=[44827], 50.00th=[45351], 60.00th=[46400], 00:19:57.445 | 70.00th=[47449], 80.00th=[48497], 90.00th=[50594], 95.00th=[52167], 00:19:57.445 | 99.00th=[59507], 99.50th=[66323], 99.90th=[68682], 99.95th=[68682], 00:19:57.445 | 99.99th=[68682] 00:19:57.445 bw ( KiB/s): min=74496, max=77210, per=6.32%, avg=75853.00, stdev=1919.09, samples=2 00:19:57.445 iops 
: min= 582, max= 603, avg=592.50, stdev=14.85, samples=2 00:19:57.445 lat (msec) : 4=0.24%, 10=49.01%, 20=1.89%, 50=42.79%, 100=6.07% 00:19:57.445 cpu : usr=0.68%, sys=1.46%, ctx=1183, majf=0, minf=1 00:19:57.445 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=97.6%, >=64=0.0% 00:19:57.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.445 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:57.445 issued rwts: total=648,621,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.445 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:57.445 job13: (groupid=0, jobs=1): err= 0: pid=92960: Tue Jul 23 22:21:29 2024 00:19:57.445 read: IOPS=581, BW=72.7MiB/s (76.3MB/s)(75.5MiB/1038msec) 00:19:57.445 slat (usec): min=6, max=498, avg=15.35, stdev=30.33 00:19:57.445 clat (usec): min=3260, max=42406, avg=7199.85, stdev=2601.24 00:19:57.445 lat (usec): min=3268, max=42416, avg=7215.20, stdev=2600.53 00:19:57.445 clat percentiles (usec): 00:19:57.445 | 1.00th=[ 5473], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 6390], 00:19:57.445 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6783], 60.00th=[ 6980], 00:19:57.445 | 70.00th=[ 7111], 80.00th=[ 7308], 90.00th=[ 7767], 95.00th=[ 9241], 00:19:57.445 | 99.00th=[13173], 99.50th=[15139], 99.90th=[42206], 99.95th=[42206], 00:19:57.445 | 99.99th=[42206] 00:19:57.445 bw ( KiB/s): min=70541, max=83366, per=6.56%, avg=76953.50, stdev=9068.64, samples=2 00:19:57.445 iops : min= 551, max= 651, avg=601.00, stdev=70.71, samples=2 00:19:57.445 write: IOPS=591, BW=73.9MiB/s (77.5MB/s)(76.8MiB/1038msec); 0 zone resets 00:19:57.445 slat (usec): min=7, max=724, avg=26.35, stdev=60.91 00:19:57.445 clat (usec): min=13196, max=80938, avg=46857.46, stdev=6064.51 00:19:57.445 lat (usec): min=13212, max=80954, avg=46883.80, stdev=6068.74 00:19:57.445 clat percentiles (usec): 00:19:57.445 | 1.00th=[25822], 5.00th=[40633], 10.00th=[42206], 20.00th=[44303], 00:19:57.445 | 30.00th=[44827], 
40.00th=[45876], 50.00th=[46400], 60.00th=[47449], 00:19:57.445 | 70.00th=[47973], 80.00th=[49546], 90.00th=[51119], 95.00th=[53740], 00:19:57.445 | 99.00th=[72877], 99.50th=[74974], 99.90th=[81265], 99.95th=[81265], 00:19:57.445 | 99.99th=[81265] 00:19:57.445 bw ( KiB/s): min=71823, max=77979, per=6.24%, avg=74901.00, stdev=4352.95, samples=2 00:19:57.445 iops : min= 561, max= 609, avg=585.00, stdev=33.94, samples=2 00:19:57.445 lat (msec) : 4=0.16%, 10=47.54%, 20=1.97%, 50=41.95%, 100=8.37% 00:19:57.445 cpu : usr=0.39%, sys=1.83%, ctx=1133, majf=0, minf=1 00:19:57.445 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=97.5%, >=64=0.0% 00:19:57.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.445 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:57.445 issued rwts: total=604,614,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.445 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:57.445 job14: (groupid=0, jobs=1): err= 0: pid=92961: Tue Jul 23 22:21:29 2024 00:19:57.445 read: IOPS=562, BW=70.3MiB/s (73.7MB/s)(72.6MiB/1033msec) 00:19:57.445 slat (usec): min=7, max=389, avg=19.60, stdev=36.38 00:19:57.445 clat (usec): min=5295, max=37312, avg=7320.95, stdev=2939.59 00:19:57.445 lat (usec): min=5308, max=37325, avg=7340.55, stdev=2938.25 00:19:57.445 clat percentiles (usec): 00:19:57.445 | 1.00th=[ 5473], 5.00th=[ 6194], 10.00th=[ 6325], 20.00th=[ 6521], 00:19:57.445 | 30.00th=[ 6652], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 7111], 00:19:57.445 | 70.00th=[ 7242], 80.00th=[ 7439], 90.00th=[ 7767], 95.00th=[ 7963], 00:19:57.445 | 99.00th=[32900], 99.50th=[35390], 99.90th=[37487], 99.95th=[37487], 00:19:57.445 | 99.99th=[37487] 00:19:57.445 bw ( KiB/s): min=70284, max=76902, per=6.27%, avg=73593.00, stdev=4679.63, samples=2 00:19:57.445 iops : min= 549, max= 600, avg=574.50, stdev=36.06, samples=2 00:19:57.445 write: IOPS=585, BW=73.2MiB/s (76.8MB/s)(75.6MiB/1033msec); 0 zone resets 
00:19:57.445 slat (usec): min=8, max=4229, avg=39.00, stdev=221.38 00:19:57.445 clat (usec): min=9964, max=78411, avg=47448.62, stdev=5936.27 00:19:57.445 lat (usec): min=9973, max=78423, avg=47487.62, stdev=5925.86 00:19:57.445 clat percentiles (usec): 00:19:57.445 | 1.00th=[22938], 5.00th=[42206], 10.00th=[43254], 20.00th=[44827], 00:19:57.445 | 30.00th=[45876], 40.00th=[46400], 50.00th=[47449], 60.00th=[47973], 00:19:57.445 | 70.00th=[49021], 80.00th=[50070], 90.00th=[52691], 95.00th=[54789], 00:19:57.445 | 99.00th=[69731], 99.50th=[74974], 99.90th=[78119], 99.95th=[78119], 00:19:57.445 | 99.99th=[78119] 00:19:57.445 bw ( KiB/s): min=72047, max=76184, per=6.17%, avg=74115.50, stdev=2925.30, samples=2 00:19:57.445 iops : min= 562, max= 595, avg=578.50, stdev=23.33, samples=2 00:19:57.445 lat (msec) : 10=47.64%, 20=1.26%, 50=40.05%, 100=11.05% 00:19:57.445 cpu : usr=0.87%, sys=1.45%, ctx=1076, majf=0, minf=1 00:19:57.445 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=97.4%, >=64=0.0% 00:19:57.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.445 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:57.445 issued rwts: total=581,605,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.445 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:57.445 job15: (groupid=0, jobs=1): err= 0: pid=92962: Tue Jul 23 22:21:29 2024 00:19:57.445 read: IOPS=582, BW=72.8MiB/s (76.4MB/s)(75.0MiB/1030msec) 00:19:57.445 slat (usec): min=8, max=3440, avg=29.47, stdev=153.13 00:19:57.445 clat (usec): min=3610, max=34709, avg=7069.00, stdev=2287.10 00:19:57.445 lat (usec): min=3621, max=34720, avg=7098.48, stdev=2290.05 00:19:57.445 clat percentiles (usec): 00:19:57.445 | 1.00th=[ 5342], 5.00th=[ 5735], 10.00th=[ 5997], 20.00th=[ 6325], 00:19:57.445 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6849], 00:19:57.445 | 70.00th=[ 7046], 80.00th=[ 7242], 90.00th=[ 7963], 95.00th=[ 8979], 00:19:57.445 | 
99.00th=[12256], 99.50th=[30278], 99.90th=[34866], 99.95th=[34866], 00:19:57.445 | 99.99th=[34866] 00:19:57.445 bw ( KiB/s): min=73728, max=78848, per=6.50%, avg=76288.00, stdev=3620.39, samples=2 00:19:57.445 iops : min= 576, max= 616, avg=596.00, stdev=28.28, samples=2 00:19:57.445 write: IOPS=607, BW=76.0MiB/s (79.7MB/s)(78.2MiB/1030msec); 0 zone resets 00:19:57.445 slat (usec): min=11, max=1122, avg=26.89, stdev=50.04 00:19:57.445 clat (usec): min=9297, max=66862, avg=45717.14, stdev=5320.58 00:19:57.445 lat (usec): min=9329, max=66888, avg=45744.04, stdev=5320.38 00:19:57.445 clat percentiles (usec): 00:19:57.445 | 1.00th=[22938], 5.00th=[39060], 10.00th=[41157], 20.00th=[43254], 00:19:57.445 | 30.00th=[44303], 40.00th=[45351], 50.00th=[46400], 60.00th=[46924], 00:19:57.445 | 70.00th=[47973], 80.00th=[49021], 90.00th=[50070], 95.00th=[51643], 00:19:57.445 | 99.00th=[58459], 99.50th=[65274], 99.90th=[66847], 99.95th=[66847], 00:19:57.445 | 99.99th=[66847] 00:19:57.445 bw ( KiB/s): min=74240, max=79104, per=6.38%, avg=76672.00, stdev=3439.37, samples=2 00:19:57.445 iops : min= 580, max= 618, avg=599.00, stdev=26.87, samples=2 00:19:57.445 lat (msec) : 4=0.24%, 10=47.06%, 20=1.71%, 50=45.19%, 100=5.79% 00:19:57.445 cpu : usr=0.97%, sys=2.04%, ctx=1067, majf=0, minf=1 00:19:57.445 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=97.5%, >=64=0.0% 00:19:57.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.445 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:57.445 issued rwts: total=600,626,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.445 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:57.446 00:19:57.446 Run status group 0 (all jobs): 00:19:57.446 READ: bw=1146MiB/s (1202MB/s), 66.5MiB/s-78.6MiB/s (69.8MB/s-82.5MB/s), io=1214MiB (1273MB), run=1026-1059msec 00:19:57.446 WRITE: bw=1173MiB/s (1230MB/s), 73.2MiB/s-77.4MiB/s (76.8MB/s-81.2MB/s), io=1242MiB (1302MB), run=1026-1059msec 
00:19:57.446 00:19:57.446 Disk stats (read/write): 00:19:57.446 sda: ios=583/552, merge=0/0, ticks=3586/25046, in_queue=28633, util=76.30% 00:19:57.446 sdb: ios=559/538, merge=0/0, ticks=3581/25240, in_queue=28822, util=76.68% 00:19:57.446 sdc: ios=613/532, merge=0/0, ticks=3988/24851, in_queue=28840, util=78.61% 00:19:57.446 sdd: ios=636/596, merge=0/0, ticks=3945/25621, in_queue=29566, util=80.28% 00:19:57.446 sde: ios=641/574, merge=0/0, ticks=3974/25191, in_queue=29166, util=81.24% 00:19:57.446 sdf: ios=617/538, merge=0/0, ticks=3926/24720, in_queue=28646, util=79.12% 00:19:57.446 sdg: ios=582/555, merge=0/0, ticks=3922/25219, in_queue=29142, util=79.71% 00:19:57.446 sdh: ios=509/548, merge=0/0, ticks=3396/25222, in_queue=28619, util=80.23% 00:19:57.446 sdi: ios=546/566, merge=0/0, ticks=3662/25244, in_queue=28906, util=82.82% 00:19:57.446 sdj: ios=580/539, merge=0/0, ticks=4058/24705, in_queue=28764, util=82.76% 00:19:57.446 sdk: ios=573/532, merge=0/0, ticks=3964/24675, in_queue=28639, util=83.20% 00:19:57.446 sdl: ios=581/547, merge=0/0, ticks=3991/24804, in_queue=28795, util=84.43% 00:19:57.446 sdm: ios=590/547, merge=0/0, ticks=3998/24736, in_queue=28735, util=84.93% 00:19:57.446 sdn: ios=563/538, merge=0/0, ticks=3942/24720, in_queue=28663, util=85.70% 00:19:57.446 sdp: ios=542/533, merge=0/0, ticks=3769/25041, in_queue=28811, util=87.51% 00:19:57.446 sdo: ios=548/548, merge=0/0, ticks=3721/24965, in_queue=28687, util=87.39% 00:19:57.446 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@82 -- # iscsicleanup 00:19:57.446 Cleaning up iSCSI connection 00:19:57.446 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:19:57.446 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:19:58.014 Logging out of session [sid: 22, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] 00:19:58.014 Logging out of session 
[sid: 23, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:19:58.014 Logging out of session [sid: 24, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:19:58.014 Logging out of session [sid: 25, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:19:58.014 Logging out of session [sid: 26, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:19:58.014 Logging out of session [sid: 27, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:19:58.014 Logging out of session [sid: 28, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:19:58.014 Logging out of session [sid: 29, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:19:58.014 Logging out of session [sid: 30, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:19:58.014 Logging out of session [sid: 31, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:19:58.014 Logging out of session [sid: 32, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:19:58.014 Logging out of session [sid: 33, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:19:58.014 Logging out of session [sid: 34, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:19:58.014 Logging out of session [sid: 35, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:19:58.014 Logging out of session [sid: 36, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:19:58.014 Logging out of session [sid: 37, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:19:58.014 Logout of [sid: 22, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] successful. 00:19:58.014 Logout of [sid: 23, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:19:58.014 Logout of [sid: 24, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:19:58.014 Logout of [sid: 25, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:19:58.014 Logout of [sid: 26, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:19:58.014 Logout of [sid: 27, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:19:58.014 Logout of [sid: 28, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:19:58.014 Logout of [sid: 29, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:19:58.014 Logout of [sid: 30, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:19:58.014 Logout of [sid: 31, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:19:58.014 Logout of [sid: 32, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:19:58.014 Logout of [sid: 33, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:19:58.014 Logout of [sid: 34, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:19:58.014 Logout of [sid: 35, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:19:58.014 Logout of [sid: 36, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 00:19:58.014 Logout of [sid: 37, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 
00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@983 -- # rm -rf 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@84 -- # RPCS= 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # seq 0 15 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target0\n' 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc0\n' 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target1\n' 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc1\n' 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target2\n' 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc2\n' 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target3\n' 00:19:58.014 
22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc3\n' 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target4\n' 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc4\n' 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target5\n' 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc5\n' 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target6\n' 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc6\n' 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target7\n' 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc7\n' 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target8\n' 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc8\n' 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target9\n' 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc9\n' 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target10\n' 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc10\n' 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target11\n' 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc11\n' 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target12\n' 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc12\n' 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target13\n' 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc13\n' 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target14\n' 00:19:58.014 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc14\n' 00:19:58.015 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:58.015 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target15\n' 00:19:58.015 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc15\n' 00:19:58.015 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:58.015 22:21:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@90 -- # echo -e iscsi_delete_target_node 'iqn.2016-06.io.spdk:Target0\nbdev_malloc_delete' 'Malloc0\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target1\nbdev_malloc_delete' 'Malloc1\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target2\nbdev_malloc_delete' 'Malloc2\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target3\nbdev_malloc_delete' 'Malloc3\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target4\nbdev_malloc_delete' 'Malloc4\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target5\nbdev_malloc_delete' 'Malloc5\niscsi_delete_target_node' 
'iqn.2016-06.io.spdk:Target6\nbdev_malloc_delete' 'Malloc6\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target7\nbdev_malloc_delete' 'Malloc7\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target8\nbdev_malloc_delete' 'Malloc8\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target9\nbdev_malloc_delete' 'Malloc9\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target10\nbdev_malloc_delete' 'Malloc10\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target11\nbdev_malloc_delete' 'Malloc11\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target12\nbdev_malloc_delete' 'Malloc12\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target13\nbdev_malloc_delete' 'Malloc13\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target14\nbdev_malloc_delete' 'Malloc14\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target15\nbdev_malloc_delete' 'Malloc15\n' 00:19:58.621 22:21:30 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@92 -- # trap 'delete_tmp_files; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:19:58.621 22:21:30 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@94 -- # killprocess 92442 00:19:58.621 22:21:30 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@948 -- # '[' -z 92442 ']' 00:19:58.621 22:21:30 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@952 -- # kill -0 92442 00:19:58.621 22:21:30 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@953 -- # uname 00:19:58.621 22:21:30 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:58.621 22:21:30 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92442 00:19:58.621 22:21:30 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:58.621 22:21:30 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:58.621 killing process with pid 92442 00:19:58.621 22:21:30 
iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92442' 00:19:58.621 22:21:30 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@967 -- # kill 92442 00:19:58.621 22:21:30 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@972 -- # wait 92442 00:19:58.888 22:21:30 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@95 -- # killprocess 92477 00:19:58.888 22:21:30 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@948 -- # '[' -z 92477 ']' 00:19:58.888 22:21:30 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@952 -- # kill -0 92477 00:19:58.888 22:21:30 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@953 -- # uname 00:19:58.888 22:21:30 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:58.888 22:21:30 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92477 00:19:58.888 22:21:31 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # process_name=spdk_trace_reco 00:19:58.888 22:21:31 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@958 -- # '[' spdk_trace_reco = sudo ']' 00:19:58.888 killing process with pid 92477 00:19:58.888 22:21:31 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92477' 00:19:58.888 22:21:31 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@967 -- # kill 92477 00:19:58.888 22:21:31 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@972 -- # wait 92477 00:19:58.888 22:21:31 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@96 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -f ./tmp-trace/record.trace 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@100 -- # grep 'trace entries for lcore' ./tmp-trace/record.notice 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@100 -- # cut -d ' ' -f 2 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@100 -- # record_num='168699 00:20:11.103 171596 00:20:11.103 168042 00:20:11.103 171241' 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # grep 'Trace Size of lcore' ./tmp-trace/trace.log 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # cut -d ' ' -f 6 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # trace_tool_num='168699 00:20:11.103 171596 00:20:11.103 168042 00:20:11.103 171241' 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@105 -- # delete_tmp_files 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@19 -- # rm -rf ./tmp-trace 00:20:11.103 entries numbers from trace record are: 168699 171596 168042 171241 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@107 -- # echo 'entries numbers from trace record are:' 168699 171596 168042 171241 00:20:11.103 entries numbers from trace tool are: 168699 171596 168042 171241 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@108 -- # echo 'entries numbers from trace tool are:' 168699 171596 168042 171241 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@110 -- # arr_record_num=($record_num) 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@111 -- # arr_trace_tool_num=($trace_tool_num) 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@112 -- # len_arr_record_num=4 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@113 -- # len_arr_trace_tool_num=4 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@116 -- # '[' 4 -ne 4 ']' 
00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # seq 0 3 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 168699 -le 4096 ']' 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 168699 -ne 168699 ']' 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 171596 -le 4096 ']' 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 171596 -ne 171596 ']' 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 168042 -le 4096 ']' 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 168042 -ne 168042 ']' 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 171241 -le 4096 ']' 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 171241 -ne 171241 ']' 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@135 -- # trap - SIGINT SIGTERM EXIT 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@136 -- # iscsitestfini 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:20:11.103 
00:20:11.103 real 0m19.215s 00:20:11.103 user 0m40.903s 00:20:11.103 sys 0m4.169s 00:20:11.103 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:11.103 ************************************ 00:20:11.104 22:21:43 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:20:11.104 END TEST iscsi_tgt_trace_record 00:20:11.104 ************************************ 00:20:11.104 22:21:43 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@41 -- # run_test iscsi_tgt_login_redirection /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection/login_redirection.sh 00:20:11.104 22:21:43 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:11.104 22:21:43 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:11.104 22:21:43 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:20:11.367 ************************************ 00:20:11.367 START TEST iscsi_tgt_login_redirection 00:20:11.367 ************************************ 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection/login_redirection.sh 00:20:11.367 * Looking for test storage... 
00:20:11.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@25 -- # 
INITIATOR_TAG=2 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@12 -- # iscsitestinit 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@14 -- # NULL_BDEV_SIZE=64 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@17 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@18 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@20 -- # rpc_addr1=/var/tmp/spdk0.sock 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@21 -- # rpc_addr2=/var/tmp/spdk1.sock 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@25 -- # timing_enter start_iscsi_tgts 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:20:11.367 22:21:43 
iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@28 -- # pid1=93315 00:20:11.367 Process pid: 93315 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@29 -- # echo 'Process pid: 93315' 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@27 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk0.sock -i 0 -m 0x1 --wait-for-rpc 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@32 -- # pid2=93316 00:20:11.367 Process pid: 93316 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@33 -- # echo 'Process pid: 93316' 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@35 -- # trap 'killprocess $pid1; killprocess $pid2; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@31 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk1.sock -i 1 -m 0x2 --wait-for-rpc 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@37 -- # waitforlisten 93315 /var/tmp/spdk0.sock 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@829 -- # '[' -z 93315 ']' 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk0.sock 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:11.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock... 
00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock...' 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:11.367 22:21:43 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:20:11.367 [2024-07-23 22:21:43.453595] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:20:11.367 [2024-07-23 22:21:43.453678] [ DPDK EAL parameters: iscsi -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:11.367 [2024-07-23 22:21:43.464476] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:20:11.367 [2024-07-23 22:21:43.464569] [ DPDK EAL parameters: iscsi -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.628 [2024-07-23 22:21:43.576680] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:11.628 [2024-07-23 22:21:43.591599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.628 [2024-07-23 22:21:43.592342] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:20:11.628 [2024-07-23 22:21:43.613387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.628 [2024-07-23 22:21:43.655871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.628 [2024-07-23 22:21:43.670388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.564 22:21:44 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:12.564 22:21:44 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@862 -- # return 0 00:20:12.564 22:21:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_set_options -w 0 -o 30 -a 16 00:20:12.564 22:21:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock framework_start_init 00:20:12.823 [2024-07-23 22:21:44.904050] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:13.083 22:21:45 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@40 -- # echo 'iscsi_tgt_1 is listening.' 00:20:13.083 iscsi_tgt_1 is listening. 00:20:13.083 22:21:45 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@42 -- # waitforlisten 93316 /var/tmp/spdk1.sock 00:20:13.083 22:21:45 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@829 -- # '[' -z 93316 ']' 00:20:13.083 22:21:45 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk1.sock 00:20:13.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock... 
00:20:13.083 22:21:45 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:13.083 22:21:45 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock...' 00:20:13.083 22:21:45 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:13.083 22:21:45 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:20:13.342 22:21:45 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:13.342 22:21:45 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@862 -- # return 0 00:20:13.342 22:21:45 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_set_options -w 0 -o 30 -a 16 00:20:13.342 22:21:45 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock framework_start_init 00:20:13.601 [2024-07-23 22:21:45.695634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:13.860 iscsi_tgt_2 is listening. 00:20:13.860 22:21:45 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@45 -- # echo 'iscsi_tgt_2 is listening.' 
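[Editor's note] The trace up to this point launches two independent iscsi_tgt processes inside the spdk_iscsi_ns network namespace, each with its own RPC socket, shared-memory id, and core mask, so that logins on one can later be redirected to the other. A minimal sketch of how those two launch command lines are assembled (paths and socket names taken from the log above; launch_cmd is a hypothetical helper, not part of SPDK):

```shell
#!/usr/bin/env bash
# Sketch only: render the two iscsi_tgt launch commands seen in the log.
ISCSI_TGT=/home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt
TARGET_NAMESPACE=spdk_iscsi_ns

launch_cmd() {
    local rpc_sock=$1 shm_id=$2 core_mask=$3
    # --wait-for-rpc defers framework_start_init so iscsi_set_options
    # can be issued over the RPC socket first, as the trace does.
    echo "ip netns exec $TARGET_NAMESPACE $ISCSI_TGT" \
         "-r $rpc_sock -i $shm_id -m $core_mask --wait-for-rpc"
}

cmd1=$(launch_cmd /var/tmp/spdk0.sock 0 0x1)
cmd2=$(launch_cmd /var/tmp/spdk1.sock 1 0x2)
printf '%s\n' "$cmd1" "$cmd2"
```

Distinct -i (shared-memory id) and -r (RPC socket) values are what let two SPDK targets coexist on one host.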
00:20:13.860 22:21:45 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@47 -- # timing_exit start_iscsi_tgts 00:20:13.860 22:21:45 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:13.860 22:21:45 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:20:13.860 22:21:45 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:20:14.119 22:21:46 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_portal_group 1 10.0.0.1:3260 00:20:14.377 22:21:46 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock bdev_null_create Null0 64 512 00:20:14.377 Null0 00:20:14.377 22:21:46 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_target_node Target1 Target1_alias Null0:0 1:2 64 -d 00:20:14.636 22:21:46 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:20:14.895 22:21:47 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_portal_group 1 10.0.0.3:3260 -p 00:20:15.154 22:21:47 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock bdev_null_create Null0 64 512 00:20:15.413 Null0 00:20:15.413 22:21:47 iscsi_tgt.iscsi_tgt_login_redirection 
-- login_redirection/login_redirection.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_target_node Target1 Target1_alias Null0:0 1:2 64 -d 00:20:15.413 22:21:47 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@67 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:20:15.413 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:20:15.413 22:21:47 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@68 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:20:15.413 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:20:15.413 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:20:15.413 22:21:47 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@69 -- # waitforiscsidevices 1 00:20:15.413 22:21:47 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@116 -- # local num=1 00:20:15.413 22:21:47 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:20:15.413 22:21:47 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:20:15.413 22:21:47 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:20:15.413 22:21:47 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:20:15.672 [2024-07-23 22:21:47.610303] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:15.672 22:21:47 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # n=1 00:20:15.672 22:21:47 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:20:15.672 22:21:47 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@123 -- # return 0 00:20:15.672 22:21:47 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@72 -- # 
fiopid=93411 00:20:15.672 22:21:47 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t randrw -r 15 00:20:15.672 FIO pid: 93411 00:20:15.672 22:21:47 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@73 -- # echo 'FIO pid: 93411' 00:20:15.672 22:21:47 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@75 -- # trap 'iscsicleanup; killprocess $pid1; killprocess $pid2; killprocess $fiopid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:20:15.672 22:21:47 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections 00:20:15.673 22:21:47 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # jq length 00:20:15.673 [global] 00:20:15.673 thread=1 00:20:15.673 invalidate=1 00:20:15.673 rw=randrw 00:20:15.673 time_based=1 00:20:15.673 runtime=15 00:20:15.673 ioengine=libaio 00:20:15.673 direct=1 00:20:15.673 bs=512 00:20:15.673 iodepth=1 00:20:15.673 norandommap=1 00:20:15.673 numjobs=1 00:20:15.673 00:20:15.673 [job0] 00:20:15.673 filename=/dev/sda 00:20:15.673 queue_depth set to 113 (sda) 00:20:15.673 job0: (g=0): rw=randrw, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:20:15.673 fio-3.35 00:20:15.673 Starting 1 thread 00:20:15.673 [2024-07-23 22:21:47.777539] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:15.932 22:21:47 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # '[' 1 = 1 ']' 00:20:15.932 22:21:47 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@78 -- # jq length 00:20:15.932 22:21:47 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk1.sock iscsi_get_connections 00:20:16.191 22:21:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@78 -- # '[' 0 = 0 ']' 00:20:16.191 22:21:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_set_redirect iqn.2016-06.io.spdk:Target1 1 -a 10.0.0.3 -p 3260 00:20:16.449 22:21:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_request_logout iqn.2016-06.io.spdk:Target1 -t 1 00:20:16.449 22:21:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@85 -- # sleep 5 00:20:21.767 22:21:53 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections 00:20:21.767 22:21:53 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # jq length 00:20:21.767 22:21:53 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # '[' 0 = 0 ']' 00:20:21.767 22:21:53 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_get_connections 00:20:21.767 22:21:53 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # jq length 00:20:22.026 22:21:54 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # '[' 1 = 1 ']' 00:20:22.026 22:21:54 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_set_redirect iqn.2016-06.io.spdk:Target1 1 00:20:22.285 22:21:54 iscsi_tgt.iscsi_tgt_login_redirection -- 
login_redirection/login_redirection.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_target_node_request_logout iqn.2016-06.io.spdk:Target1 -t 1 00:20:22.543 22:21:54 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@93 -- # sleep 5 00:20:27.820 22:21:59 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections 00:20:27.820 22:21:59 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # jq length 00:20:27.820 22:21:59 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # '[' 1 = 1 ']' 00:20:27.820 22:21:59 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_get_connections 00:20:27.820 22:21:59 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # jq length 00:20:27.820 22:21:59 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # '[' 0 = 0 ']' 00:20:27.820 22:21:59 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@98 -- # wait 93411 00:20:31.110 [2024-07-23 22:22:02.885344] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:31.110 00:20:31.110 job0: (groupid=0, jobs=1): err= 0: pid=93439: Tue Jul 23 22:22:02 2024 00:20:31.110 read: IOPS=6988, BW=3494KiB/s (3578kB/s)(51.2MiB/15001msec) 00:20:31.110 slat (nsec): min=3075, max=69945, avg=5594.05, stdev=1487.83 00:20:31.110 clat (usec): min=32, max=2008.1k, avg=63.19, stdev=6201.84 00:20:31.110 lat (usec): min=41, max=2008.1k, avg=68.78, stdev=6201.90 00:20:31.110 clat percentiles (usec): 00:20:31.110 | 1.00th=[ 41], 5.00th=[ 42], 10.00th=[ 42], 20.00th=[ 42], 00:20:31.110 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 43], 
00:20:31.110 | 70.00th=[ 44], 80.00th=[ 46], 90.00th=[ 50], 95.00th=[ 52], 00:20:31.110 | 99.00th=[ 62], 99.50th=[ 68], 99.90th=[ 109], 99.95th=[ 198], 00:20:31.110 | 99.99th=[ 482] 00:20:31.110 bw ( KiB/s): min= 1431, max= 5023, per=100.00%, avg=4347.30, stdev=1048.55, samples=23 00:20:31.110 iops : min= 2862, max=10046, avg=8694.61, stdev=2097.11, samples=23 00:20:31.110 write: IOPS=6976, BW=3488KiB/s (3572kB/s)(51.1MiB/15001msec); 0 zone resets 00:20:31.110 slat (nsec): min=3336, max=62407, avg=5466.66, stdev=1467.49 00:20:31.110 clat (usec): min=23, max=2005.4k, avg=68.02, stdev=6198.80 00:20:31.110 lat (usec): min=44, max=2005.4k, avg=73.49, stdev=6198.84 00:20:31.110 clat percentiles (usec): 00:20:31.110 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 45], 00:20:31.110 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 49], 60.00th=[ 49], 00:20:31.110 | 70.00th=[ 50], 80.00th=[ 51], 90.00th=[ 55], 95.00th=[ 58], 00:20:31.110 | 99.00th=[ 68], 99.50th=[ 75], 99.90th=[ 119], 99.95th=[ 192], 00:20:31.110 | 99.99th=[ 429] 00:20:31.110 bw ( KiB/s): min= 1391, max= 5023, per=100.00%, avg=4341.61, stdev=1060.38, samples=23 00:20:31.110 iops : min= 2782, max=10046, avg=8683.22, stdev=2120.76, samples=23 00:20:31.110 lat (usec) : 50=84.37%, 100=15.51%, 250=0.10%, 500=0.02%, 750=0.01% 00:20:31.110 lat (usec) : 1000=0.01% 00:20:31.110 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:20:31.110 cpu : usr=2.69%, sys=9.11%, ctx=209499, majf=0, minf=1 00:20:31.110 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:31.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.110 issued rwts: total=104832,104655,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:31.110 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:31.110 00:20:31.110 Run status group 0 (all jobs): 00:20:31.110 READ: bw=3494KiB/s (3578kB/s), 3494KiB/s-3494KiB/s 
(3578kB/s-3578kB/s), io=51.2MiB (53.7MB), run=15001-15001msec 00:20:31.110 WRITE: bw=3488KiB/s (3572kB/s), 3488KiB/s-3488KiB/s (3572kB/s-3572kB/s), io=51.1MiB (53.6MB), run=15001-15001msec 00:20:31.110 00:20:31.110 Disk stats (read/write): 00:20:31.110 sda: ios=103784/103575, merge=0/0, ticks=6681/7161, in_queue=13842, util=99.41% 00:20:31.110 Cleaning up iSCSI connection 00:20:31.110 22:22:02 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@100 -- # trap - SIGINT SIGTERM EXIT 00:20:31.110 22:22:02 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@102 -- # iscsicleanup 00:20:31.110 22:22:02 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:20:31.110 22:22:02 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:20:31.110 Logging out of session [sid: 38, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:20:31.111 Logout of [sid: 38, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
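[Editor's note] The redirect round-trip exercised while fio runs above is a pair of RPCs: iscsi_target_node_set_redirect points a portal group at a temporary portal, and iscsi_target_node_request_logout asks logged-in initiators to log out so they re-login and land on the redirected portal (the connection counts from iscsi_get_connections then flip between the two sockets). A sketch that renders those two commands with the values from the log (rpc.py path taken from the log; the helper functions are hypothetical):

```shell
#!/usr/bin/env bash
# Sketch only: build the two redirect-related RPC invocations from the test.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
tgt=iqn.2016-06.io.spdk:Target1

set_redirect_cmd() {
    # Redirect portal group $2 on the target behind socket $1 to $3:$4.
    echo "$rpc_py -s $1 iscsi_target_node_set_redirect $tgt $2 -a $3 -p $4"
}
request_logout_cmd() {
    # Ask initiators logged in via portal group $2 to log out and re-login.
    echo "$rpc_py -s $1 iscsi_target_node_request_logout $tgt -t $2"
}

redirect_cmd=$(set_redirect_cmd /var/tmp/spdk0.sock 1 10.0.0.3 3260)
logout_cmd=$(request_logout_cmd /var/tmp/spdk0.sock 1)
printf '%s\n' "$redirect_cmd" "$logout_cmd"
```

The trace later clears the redirect by calling iscsi_target_node_set_redirect with only the portal-group tag (no -a/-p) and repeating the logout request on the second target, moving the session back.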
00:20:31.111 22:22:02 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:20:31.111 22:22:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@983 -- # rm -rf 00:20:31.111 22:22:03 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@103 -- # killprocess 93315 00:20:31.111 22:22:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@948 -- # '[' -z 93315 ']' 00:20:31.111 22:22:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@952 -- # kill -0 93315 00:20:31.111 22:22:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@953 -- # uname 00:20:31.111 22:22:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:31.111 22:22:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93315 00:20:31.111 killing process with pid 93315 00:20:31.111 22:22:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:31.111 22:22:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:31.111 22:22:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93315' 00:20:31.111 22:22:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@967 -- # kill 93315 00:20:31.111 22:22:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@972 -- # wait 93315 00:20:31.370 22:22:03 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@104 -- # killprocess 93316 00:20:31.370 22:22:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@948 -- # '[' -z 93316 ']' 00:20:31.371 22:22:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@952 -- # kill -0 93316 00:20:31.371 22:22:03 iscsi_tgt.iscsi_tgt_login_redirection -- 
common/autotest_common.sh@953 -- # uname 00:20:31.371 22:22:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:31.371 22:22:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93316 00:20:31.371 killing process with pid 93316 00:20:31.371 22:22:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:31.371 22:22:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:31.371 22:22:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93316' 00:20:31.371 22:22:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@967 -- # kill 93316 00:20:31.371 22:22:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@972 -- # wait 93316 00:20:31.629 22:22:03 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@105 -- # iscsitestfini 00:20:31.629 22:22:03 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:20:31.629 00:20:31.629 real 0m20.386s 00:20:31.629 user 0m40.020s 00:20:31.629 sys 0m6.105s 00:20:31.629 22:22:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:31.629 22:22:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:20:31.629 ************************************ 00:20:31.629 END TEST iscsi_tgt_login_redirection 00:20:31.629 ************************************ 00:20:31.629 22:22:03 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@42 -- # run_test iscsi_tgt_digests /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests/digests.sh 00:20:31.629 22:22:03 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:31.629 22:22:03 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:31.629 22:22:03 iscsi_tgt -- 
common/autotest_common.sh@10 -- # set +x 00:20:31.629 ************************************ 00:20:31.629 START TEST iscsi_tgt_digests 00:20:31.629 ************************************ 00:20:31.629 22:22:03 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests/digests.sh 00:20:31.629 * Looking for test storage... 00:20:31.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@23 -- # 
ISCSI_PORT=3260 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@11 -- # iscsitestinit 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@49 -- # MALLOC_BDEV_SIZE=64 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@52 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@54 -- # timing_enter start_iscsi_tgt 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:20:31.891 Process pid: 93699 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@57 -- # pid=93699 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@58 -- # echo 'Process pid: 93699' 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@60 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@56 -- # ip netns exec spdk_iscsi_ns 
/home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@62 -- # waitforlisten 93699 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@829 -- # '[' -z 93699 ']' 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:31.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:31.891 22:22:03 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:20:31.891 [2024-07-23 22:22:03.892159] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:20:31.891 [2024-07-23 22:22:03.892241] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93699 ] 00:20:31.891 [2024-07-23 22:22:04.011389] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:20:31.891 [2024-07-23 22:22:04.028834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:31.891 [2024-07-23 22:22:04.080384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:31.891 [2024-07-23 22:22:04.080513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:31.891 [2024-07-23 22:22:04.080695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:31.891 [2024-07-23 22:22:04.080810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.830 22:22:04 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:32.830 22:22:04 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@862 -- # return 0 00:20:32.830 22:22:04 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@63 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:20:32.830 22:22:04 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.830 22:22:04 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:20:32.830 22:22:04 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.830 22:22:04 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@64 -- # rpc_cmd framework_start_init 00:20:32.830 22:22:04 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.830 22:22:04 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:20:32.830 [2024-07-23 22:22:04.840333] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:32.830 22:22:04 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.830 iscsi_tgt is listening. Running tests... 00:20:32.830 22:22:04 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@65 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:20:32.830 22:22:04 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@67 -- # timing_exit start_iscsi_tgt 00:20:32.830 22:22:04 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:32.830 22:22:04 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:20:32.830 22:22:05 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@69 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:20:32.830 22:22:05 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.830 22:22:05 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:20:32.830 22:22:05 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.830 22:22:05 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@70 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:20:32.830 22:22:05 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.830 22:22:05 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:20:33.089 22:22:05 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.089 22:22:05 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@71 -- # rpc_cmd bdev_malloc_create 64 512 00:20:33.089 22:22:05 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.089 22:22:05 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:20:33.089 Malloc0 00:20:33.089 22:22:05 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.089 22:22:05 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@76 -- # rpc_cmd iscsi_create_target_node Target3 Target3_alias Malloc0:0 1:2 64 -d 00:20:33.089 22:22:05 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.089 22:22:05 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:20:33.089 22:22:05 
iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.089 22:22:05 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@77 -- # sleep 1 00:20:34.028 22:22:06 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@79 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:20:34.028 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:20:34.028 22:22:06 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.DataDigest' -v None 00:20:34.028 22:22:06 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # true 00:20:34.028 22:22:06 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # DataDigestAbility='iscsiadm: Cannot modify node.conn[0].iscsi.DataDigest. Invalid param name. 00:20:34.028 iscsiadm: Could not execute operation on all records: invalid parameter' 00:20:34.028 22:22:06 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@84 -- # '[' 'iscsiadm: Cannot modify node.conn[0].iscsi.DataDigest. Invalid param name. 
00:20:34.028 iscsiadm: Could not execute operation on all records: invalid parameterx' '!=' x ']' 00:20:34.028 22:22:06 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@85 -- # run_test iscsi_tgt_digest iscsi_header_digest_test 00:20:34.028 22:22:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:34.028 22:22:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:34.028 22:22:06 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:20:34.028 ************************************ 00:20:34.028 START TEST iscsi_tgt_digest 00:20:34.028 ************************************ 00:20:34.028 22:22:06 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@1123 -- # iscsi_header_digest_test 00:20:34.028 22:22:06 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@27 -- # node_login_fio_logout 'HeaderDigest -v CRC32C' 00:20:34.028 22:22:06 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@14 -- # for arg in "$@" 00:20:34.028 22:22:06 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@15 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.HeaderDigest' -v CRC32C 00:20:34.028 22:22:06 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@17 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:20:34.028 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:20:34.028 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:20:34.028 22:22:06 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@18 -- # waitforiscsidevices 1 00:20:34.028 22:22:06 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=1 00:20:34.028 22:22:06 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:20:34.028 22:22:06 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:20:34.028 [2024-07-23 22:22:06.146678] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:34.028 22:22:06 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:20:34.028 22:22:06 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:20:34.028 22:22:06 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=1 00:20:34.028 22:22:06 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:20:34.028 22:22:06 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:20:34.028 22:22:06 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t write -r 2 00:20:34.028 [global] 00:20:34.028 thread=1 00:20:34.028 invalidate=1 00:20:34.028 rw=write 00:20:34.028 time_based=1 00:20:34.028 runtime=2 00:20:34.028 ioengine=libaio 00:20:34.028 direct=1 00:20:34.028 bs=512 00:20:34.028 iodepth=1 00:20:34.028 norandommap=1 00:20:34.028 numjobs=1 00:20:34.028 00:20:34.028 [job0] 00:20:34.028 filename=/dev/sda 00:20:34.028 queue_depth set to 113 (sda) 00:20:34.287 job0: (g=0): rw=write, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:20:34.287 fio-3.35 00:20:34.287 Starting 1 thread 00:20:34.287 [2024-07-23 22:22:06.332139] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:20:36.821 [2024-07-23 22:22:08.444257] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:36.821 00:20:36.821 job0: (groupid=0, jobs=1): err= 0: pid=93791: Tue Jul 23 22:22:08 2024 00:20:36.821 write: IOPS=13.2k, BW=6615KiB/s (6773kB/s)(12.9MiB/2001msec); 0 zone resets 00:20:36.821 slat (nsec): min=3878, max=74507, avg=5952.05, stdev=1485.66 00:20:36.821 clat (usec): min=35, max=3015, avg=69.14, stdev=29.84 00:20:36.821 lat (usec): min=58, max=3027, avg=75.09, stdev=30.07 00:20:36.821 clat percentiles (usec): 00:20:36.821 | 1.00th=[ 59], 5.00th=[ 61], 10.00th=[ 62], 20.00th=[ 65], 00:20:36.821 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 70], 00:20:36.821 | 70.00th=[ 71], 80.00th=[ 73], 90.00th=[ 76], 95.00th=[ 79], 00:20:36.821 | 99.00th=[ 92], 99.50th=[ 101], 99.90th=[ 178], 99.95th=[ 239], 00:20:36.821 | 99.99th=[ 2409] 00:20:36.821 bw ( KiB/s): min= 6320, max= 6775, per=99.69%, avg=6594.00, stdev=241.34, samples=3 00:20:36.821 iops : min=12640, max=13550, avg=13188.00, stdev=482.67, samples=3 00:20:36.821 lat (usec) : 50=0.01%, 100=99.47%, 250=0.47%, 500=0.02%, 750=0.01% 00:20:36.821 lat (usec) : 1000=0.01% 00:20:36.821 lat (msec) : 4=0.01% 00:20:36.821 cpu : usr=3.65%, sys=11.40%, ctx=26475, majf=0, minf=1 00:20:36.821 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:36.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.821 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.821 issued rwts: total=0,26472,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.821 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:36.821 00:20:36.821 Run status group 0 (all jobs): 00:20:36.821 WRITE: bw=6615KiB/s (6773kB/s), 6615KiB/s-6615KiB/s (6773kB/s-6773kB/s), io=12.9MiB (13.6MB), run=2001-2001msec 00:20:36.821 00:20:36.821 Disk stats (read/write): 00:20:36.821 sda: ios=48/24924, merge=0/0, ticks=8/1687, 
in_queue=1696, util=95.37% 00:20:36.821 22:22:08 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 2 00:20:36.821 [global] 00:20:36.821 thread=1 00:20:36.821 invalidate=1 00:20:36.821 rw=read 00:20:36.821 time_based=1 00:20:36.821 runtime=2 00:20:36.821 ioengine=libaio 00:20:36.821 direct=1 00:20:36.821 bs=512 00:20:36.821 iodepth=1 00:20:36.821 norandommap=1 00:20:36.821 numjobs=1 00:20:36.821 00:20:36.821 [job0] 00:20:36.821 filename=/dev/sda 00:20:36.821 queue_depth set to 113 (sda) 00:20:36.821 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:20:36.821 fio-3.35 00:20:36.821 Starting 1 thread 00:20:38.756 00:20:38.756 job0: (groupid=0, jobs=1): err= 0: pid=93850: Tue Jul 23 22:22:10 2024 00:20:38.756 read: IOPS=16.2k, BW=8091KiB/s (8285kB/s)(15.8MiB/2001msec) 00:20:38.756 slat (nsec): min=3785, max=54621, avg=4490.00, stdev=1560.94 00:20:38.756 clat (usec): min=39, max=991, avg=56.72, stdev= 9.22 00:20:38.756 lat (usec): min=48, max=1005, avg=61.21, stdev= 9.92 00:20:38.756 clat percentiles (usec): 00:20:38.756 | 1.00th=[ 48], 5.00th=[ 49], 10.00th=[ 49], 20.00th=[ 51], 00:20:38.756 | 30.00th=[ 52], 40.00th=[ 54], 50.00th=[ 56], 60.00th=[ 58], 00:20:38.756 | 70.00th=[ 60], 80.00th=[ 62], 90.00th=[ 65], 95.00th=[ 70], 00:20:38.756 | 99.00th=[ 86], 99.50th=[ 91], 99.90th=[ 104], 99.95th=[ 115], 00:20:38.756 | 99.99th=[ 202] 00:20:38.756 bw ( KiB/s): min= 7716, max= 8228, per=99.37%, avg=8040.67, stdev=282.28, samples=3 00:20:38.756 iops : min=15432, max=16458, avg=16082.00, stdev=565.22, samples=3 00:20:38.756 lat (usec) : 50=16.65%, 100=83.22%, 250=0.12%, 500=0.01%, 1000=0.01% 00:20:38.756 cpu : usr=5.40%, sys=11.20%, ctx=32408, majf=0, minf=1 00:20:38.756 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:38.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:20:38.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:38.756 issued rwts: total=32379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:38.756 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:38.756 00:20:38.756 Run status group 0 (all jobs): 00:20:38.756 READ: bw=8091KiB/s (8285kB/s), 8091KiB/s-8091KiB/s (8285kB/s-8285kB/s), io=15.8MiB (16.6MB), run=2001-2001msec 00:20:38.756 00:20:38.756 Disk stats (read/write): 00:20:38.756 sda: ios=30502/0, merge=0/0, ticks=1632/0, in_queue=1632, util=95.07% 00:20:38.756 22:22:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@21 -- # iscsiadm -m node --logout -p 10.0.0.1:3260 00:20:38.756 Logging out of session [sid: 39, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:20:38.756 Logout of [sid: 39, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:20:38.756 22:22:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@22 -- # waitforiscsidevices 0 00:20:38.756 22:22:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=0 00:20:38.756 22:22:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:20:38.756 22:22:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:20:38.756 22:22:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:20:38.756 22:22:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:20:38.756 iscsiadm: No active sessions. 
00:20:38.756 22:22:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # true 00:20:38.756 22:22:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=0 00:20:38.756 22:22:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:20:38.756 22:22:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:20:38.756 22:22:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@31 -- # node_login_fio_logout 'HeaderDigest -v CRC32C,None' 00:20:38.756 22:22:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@14 -- # for arg in "$@" 00:20:38.756 22:22:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@15 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.HeaderDigest' -v CRC32C,None 00:20:38.756 22:22:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@17 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:20:38.756 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:20:38.756 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:20:38.756 22:22:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@18 -- # waitforiscsidevices 1 00:20:38.756 22:22:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=1 00:20:38.756 22:22:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:20:38.756 22:22:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:20:38.756 22:22:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:20:38.756 22:22:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:20:38.756 [2024-07-23 22:22:10.918790] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:38.756 22:22:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=1 00:20:38.756 22:22:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:20:38.756 22:22:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:20:38.756 22:22:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t write -r 2 00:20:38.756 [global] 00:20:38.756 thread=1 00:20:38.756 invalidate=1 00:20:38.756 rw=write 00:20:38.756 time_based=1 00:20:38.756 runtime=2 00:20:38.756 ioengine=libaio 00:20:38.756 direct=1 00:20:38.756 bs=512 00:20:38.756 iodepth=1 00:20:38.756 norandommap=1 00:20:38.756 numjobs=1 00:20:38.756 00:20:39.041 [job0] 00:20:39.041 filename=/dev/sda 00:20:39.041 queue_depth set to 113 (sda) 00:20:39.041 job0: (g=0): rw=write, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:20:39.041 fio-3.35 00:20:39.041 Starting 1 thread 00:20:39.041 [2024-07-23 22:22:11.106611] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:20:41.585 [2024-07-23 22:22:13.220415] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:41.585 00:20:41.585 job0: (groupid=0, jobs=1): err= 0: pid=93916: Tue Jul 23 22:22:13 2024 00:20:41.585 write: IOPS=13.5k, BW=6729KiB/s (6891kB/s)(13.1MiB/2001msec); 0 zone resets 00:20:41.585 slat (nsec): min=4752, max=76166, avg=5968.85, stdev=1409.50 00:20:41.585 clat (usec): min=50, max=2545, avg=67.82, stdev=20.81 00:20:41.585 lat (usec): min=55, max=2551, avg=73.78, stdev=21.00 00:20:41.585 clat percentiles (usec): 00:20:41.585 | 1.00th=[ 57], 5.00th=[ 60], 10.00th=[ 61], 20.00th=[ 64], 00:20:41.585 | 30.00th=[ 66], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69], 00:20:41.585 | 70.00th=[ 70], 80.00th=[ 71], 90.00th=[ 74], 95.00th=[ 78], 00:20:41.585 | 99.00th=[ 88], 99.50th=[ 92], 99.90th=[ 109], 99.95th=[ 126], 00:20:41.585 | 99.99th=[ 523] 00:20:41.585 bw ( KiB/s): min= 6519, max= 6921, per=100.00%, avg=6741.33, stdev=204.37, samples=3 00:20:41.585 iops : min=13038, max=13842, avg=13482.67, stdev=408.74, samples=3 00:20:41.585 lat (usec) : 100=99.79%, 250=0.17%, 500=0.03%, 750=0.01% 00:20:41.585 lat (msec) : 4=0.01% 00:20:41.585 cpu : usr=2.80%, sys=12.15%, ctx=26940, majf=0, minf=1 00:20:41.585 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:41.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.585 issued rwts: total=0,26931,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.585 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:41.585 00:20:41.585 Run status group 0 (all jobs): 00:20:41.585 WRITE: bw=6729KiB/s (6891kB/s), 6729KiB/s-6729KiB/s (6891kB/s-6891kB/s), io=13.1MiB (13.8MB), run=2001-2001msec 00:20:41.585 00:20:41.585 Disk stats (read/write): 00:20:41.585 sda: ios=48/25381, merge=0/0, ticks=5/1697, in_queue=1703, util=95.36% 00:20:41.585 22:22:13 
iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 2 00:20:41.585 [global] 00:20:41.585 thread=1 00:20:41.585 invalidate=1 00:20:41.585 rw=read 00:20:41.585 time_based=1 00:20:41.585 runtime=2 00:20:41.585 ioengine=libaio 00:20:41.586 direct=1 00:20:41.586 bs=512 00:20:41.586 iodepth=1 00:20:41.586 norandommap=1 00:20:41.586 numjobs=1 00:20:41.586 00:20:41.586 [job0] 00:20:41.586 filename=/dev/sda 00:20:41.586 queue_depth set to 113 (sda) 00:20:41.586 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:20:41.586 fio-3.35 00:20:41.586 Starting 1 thread 00:20:43.493 00:20:43.493 job0: (groupid=0, jobs=1): err= 0: pid=93969: Tue Jul 23 22:22:15 2024 00:20:43.493 read: IOPS=16.2k, BW=8087KiB/s (8281kB/s)(15.8MiB/2001msec) 00:20:43.493 slat (nsec): min=3565, max=70078, avg=5883.49, stdev=1564.07 00:20:43.493 clat (usec): min=24, max=1861, avg=55.48, stdev=13.91 00:20:43.493 lat (usec): min=47, max=1869, avg=61.37, stdev=14.26 00:20:43.493 clat percentiles (usec): 00:20:43.493 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 50], 00:20:43.493 | 30.00th=[ 52], 40.00th=[ 53], 50.00th=[ 54], 60.00th=[ 57], 00:20:43.493 | 70.00th=[ 59], 80.00th=[ 61], 90.00th=[ 63], 95.00th=[ 67], 00:20:43.493 | 99.00th=[ 80], 99.50th=[ 85], 99.90th=[ 113], 99.95th=[ 178], 00:20:43.493 | 99.99th=[ 465] 00:20:43.493 bw ( KiB/s): min= 7710, max= 8276, per=99.18%, avg=8020.67, stdev=287.03, samples=3 00:20:43.493 iops : min=15420, max=16552, avg=16041.33, stdev=574.06, samples=3 00:20:43.493 lat (usec) : 50=19.77%, 100=80.06%, 250=0.14%, 500=0.01%, 750=0.01% 00:20:43.493 lat (usec) : 1000=0.01% 00:20:43.493 lat (msec) : 2=0.01% 00:20:43.493 cpu : usr=3.45%, sys=15.10%, ctx=32366, majf=0, minf=1 00:20:43.493 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:43.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.493 issued rwts: total=32363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.493 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:43.493 00:20:43.493 Run status group 0 (all jobs): 00:20:43.493 READ: bw=8087KiB/s (8281kB/s), 8087KiB/s-8087KiB/s (8281kB/s-8281kB/s), io=15.8MiB (16.6MB), run=2001-2001msec 00:20:43.493 00:20:43.493 Disk stats (read/write): 00:20:43.493 sda: ios=30545/0, merge=0/0, ticks=1648/0, in_queue=1647, util=95.07% 00:20:43.493 22:22:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@21 -- # iscsiadm -m node --logout -p 10.0.0.1:3260 00:20:43.493 Logging out of session [sid: 40, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:20:43.493 Logout of [sid: 40, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:20:43.493 22:22:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@22 -- # waitforiscsidevices 0 00:20:43.493 22:22:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=0 00:20:43.493 22:22:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:20:43.493 22:22:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:20:43.493 22:22:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:20:43.493 22:22:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:20:43.493 iscsiadm: No active sessions. 
00:20:43.493 22:22:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # true 00:20:43.493 22:22:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=0 00:20:43.493 22:22:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:20:43.493 22:22:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:20:43.493 00:20:43.493 real 0m9.533s 00:20:43.493 user 0m0.845s 00:20:43.493 sys 0m1.325s 00:20:43.493 22:22:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:43.493 22:22:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@10 -- # set +x 00:20:43.493 ************************************ 00:20:43.493 END TEST iscsi_tgt_digest 00:20:43.493 ************************************ 00:20:43.493 22:22:15 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:20:43.493 22:22:15 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@92 -- # iscsicleanup 00:20:43.493 Cleaning up iSCSI connection 00:20:43.493 22:22:15 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:20:43.493 22:22:15 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:20:43.493 iscsiadm: No matching sessions found 00:20:43.493 22:22:15 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@981 -- # true 00:20:43.493 22:22:15 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:20:43.493 22:22:15 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@983 -- # rm -rf 00:20:43.493 22:22:15 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@93 -- # killprocess 93699 00:20:43.493 22:22:15 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@948 -- # '[' -z 93699 ']' 00:20:43.493 22:22:15 iscsi_tgt.iscsi_tgt_digests -- 
common/autotest_common.sh@952 -- # kill -0 93699 00:20:43.752 22:22:15 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@953 -- # uname 00:20:43.752 22:22:15 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:43.752 22:22:15 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93699 00:20:43.752 22:22:15 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:43.752 killing process with pid 93699 00:20:43.752 22:22:15 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:43.752 22:22:15 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93699' 00:20:43.752 22:22:15 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@967 -- # kill 93699 00:20:43.752 22:22:15 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@972 -- # wait 93699 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@94 -- # iscsitestfini 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:20:44.012 00:20:44.012 real 0m12.302s 00:20:44.012 user 0m45.472s 00:20:44.012 sys 0m3.789s 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:20:44.012 ************************************ 00:20:44.012 END TEST iscsi_tgt_digests 00:20:44.012 ************************************ 00:20:44.012 22:22:16 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@43 -- # run_test iscsi_tgt_fuzz /home/vagrant/spdk_repo/spdk/test/fuzz/autofuzz_iscsi.sh --timeout=30 00:20:44.012 22:22:16 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:44.012 22:22:16 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:44.012 22:22:16 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 
00:20:44.012 ************************************ 00:20:44.012 START TEST iscsi_tgt_fuzz 00:20:44.012 ************************************ 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/fuzz/autofuzz_iscsi.sh --timeout=30 00:20:44.012 * Looking for test storage... 00:20:44.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/fuzz 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@24 -- # 
NETMASK=10.0.0.2/32 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@11 -- # iscsitestinit 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@13 -- # '[' -z 10.0.0.1 ']' 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@18 -- # '[' -z 10.0.0.2 ']' 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@23 -- # timing_enter iscsi_fuzz_test 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@25 -- # MALLOC_BDEV_SIZE=64 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@26 -- # MALLOC_BLOCK_SIZE=4096 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@28 -- # TEST_TIMEOUT=1200 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@31 -- # for i in "$@" 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@32 -- # case "$i" in 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@34 -- # TEST_TIMEOUT=30 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@39 -- # timing_enter start_iscsi_tgt 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:20:44.012 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:44.272 Process iscsipid: 94069 00:20:44.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.272 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@42 -- # iscsipid=94069 00:20:44.272 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@43 -- # echo 'Process iscsipid: 94069' 00:20:44.272 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@45 -- # trap 'killprocess $iscsipid; exit 1' SIGINT SIGTERM EXIT 00:20:44.272 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@41 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --disable-cpumask-locks --wait-for-rpc 00:20:44.272 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@47 -- # waitforlisten 94069 00:20:44.272 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@829 -- # '[' -z 94069 ']' 00:20:44.272 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.272 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:44.272 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:44.272 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:44.272 22:22:16 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:45.210 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:45.210 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@862 -- # return 0 00:20:45.210 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@49 -- # rpc_cmd iscsi_set_options -o 60 -a 16 00:20:45.210 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.210 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:45.210 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.210 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@50 -- # rpc_cmd framework_start_init 00:20:45.210 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.210 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:45.210 iscsi_tgt is listening. Running tests... 00:20:45.210 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.210 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@51 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:20:45.210 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@52 -- # timing_exit start_iscsi_tgt 00:20:45.210 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:45.210 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:45.210 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@54 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:20:45.210 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.210 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:45.210 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.210 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@55 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:20:45.210 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.210 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:45.210 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.210 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@56 -- # rpc_cmd bdev_malloc_create 64 4096 00:20:45.210 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.210 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:45.470 Malloc0 00:20:45.470 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.470 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@57 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Malloc0:0 1:2 256 -d 00:20:45.470 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.470 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:45.470 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:20:45.470 22:22:17 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@58 -- # sleep 1 00:20:46.407 22:22:18 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@60 -- # trap 'killprocess $iscsipid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:20:46.407 22:22:18 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/iscsi_fuzz/iscsi_fuzz -m 0xF0 -T 10.0.0.1 -t 30 00:21:18.551 Fuzzing completed. Shutting down the fuzz application. 00:21:18.551 00:21:18.551 device 0x22d3980 stats: Sent 12741 valid opcode PDUs, 116495 invalid opcode PDUs. 00:21:18.551 22:22:48 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@64 -- # rpc_cmd iscsi_delete_target_node iqn.2016-06.io.spdk:disk1 00:21:18.551 22:22:48 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.551 22:22:48 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:18.551 22:22:48 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.551 22:22:48 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@67 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:18.551 22:22:48 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.551 22:22:48 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:18.551 22:22:48 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.551 22:22:48 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:18.551 22:22:48 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@71 -- # killprocess 94069 00:21:18.551 22:22:48 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@948 -- # '[' -z 94069 ']' 00:21:18.551 22:22:48 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@952 -- # kill -0 94069 00:21:18.551 22:22:48 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@953 -- # uname 00:21:18.551 22:22:48 iscsi_tgt.iscsi_tgt_fuzz -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:18.551 22:22:48 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94069 00:21:18.551 killing process with pid 94069 00:21:18.551 22:22:48 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:18.551 22:22:48 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:18.551 22:22:48 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94069' 00:21:18.551 22:22:48 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@967 -- # kill 94069 00:21:18.551 22:22:48 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@972 -- # wait 94069 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@73 -- # iscsitestfini 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@75 -- # timing_exit iscsi_fuzz_test 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:18.551 ************************************ 00:21:18.551 END TEST iscsi_tgt_fuzz 00:21:18.551 ************************************ 00:21:18.551 00:21:18.551 real 0m33.104s 00:21:18.551 user 3m9.324s 00:21:18.551 sys 0m16.464s 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:18.551 22:22:49 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@44 -- # run_test iscsi_tgt_multiconnection /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection/multiconnection.sh 00:21:18.551 22:22:49 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:18.551 22:22:49 iscsi_tgt -- common/autotest_common.sh@1105 
-- # xtrace_disable 00:21:18.551 22:22:49 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:21:18.551 ************************************ 00:21:18.551 START TEST iscsi_tgt_multiconnection 00:21:18.551 ************************************ 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection/multiconnection.sh 00:21:18.551 * Looking for test storage... 00:21:18.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@21 -- # 
TARGET_IP2=10.0.0.3 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@11 -- # iscsitestinit 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@16 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@18 -- # CONNECTION_NUMBER=30 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@40 -- # timing_enter start_iscsi_tgt 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@42 -- # iscsipid=94497 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@43 -- # echo 'iSCSI target launched. pid: 94497' 00:21:18.551 iSCSI target launched. pid: 94497 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@41 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@44 -- # trap 'remove_backends; iscsicleanup; killprocess $iscsipid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@46 -- # waitforlisten 94497 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 94497 ']' 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:18.551 22:22:49 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:18.551 [2024-07-23 22:22:49.409169] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:21:18.551 [2024-07-23 22:22:49.410032] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94497 ] 00:21:18.551 [2024-07-23 22:22:49.528280] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:18.551 [2024-07-23 22:22:49.545645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.551 [2024-07-23 22:22:49.594512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.551 22:22:50 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:18.551 22:22:50 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:21:18.551 22:22:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 128 00:21:18.551 22:22:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:21:18.551 [2024-07-23 22:22:50.608142] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:18.811 22:22:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:18.811 22:22:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:21:19.070 22:22:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@50 -- # timing_exit start_iscsi_tgt 00:21:19.070 22:22:51 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:19.070 22:22:51 
iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:19.070 22:22:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:21:19.329 22:22:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:21:19.589 Creating an iSCSI target node. 00:21:19.589 22:22:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@55 -- # echo 'Creating an iSCSI target node.' 00:21:19.589 22:22:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs0 -c 1048576 00:21:19.848 22:22:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@56 -- # ls_guid=6e6ce82b-c73f-40d8-af75-b4e67cff158a 00:21:19.848 22:22:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@59 -- # get_lvs_free_mb 6e6ce82b-c73f-40d8-af75-b4e67cff158a 00:21:19.848 22:22:51 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1362 -- # local lvs_uuid=6e6ce82b-c73f-40d8-af75-b4e67cff158a 00:21:19.849 22:22:51 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1363 -- # local lvs_info 00:21:19.849 22:22:51 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1364 -- # local fc 00:21:19.849 22:22:51 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1365 -- # local cs 00:21:19.849 22:22:51 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1366 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:19.849 22:22:52 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1366 -- # lvs_info='[ 00:21:19.849 { 00:21:19.849 "uuid": "6e6ce82b-c73f-40d8-af75-b4e67cff158a", 00:21:19.849 "name": 
"lvs0", 00:21:19.849 "base_bdev": "Nvme0n1", 00:21:19.849 "total_data_clusters": 5099, 00:21:19.849 "free_clusters": 5099, 00:21:19.849 "block_size": 4096, 00:21:19.849 "cluster_size": 1048576 00:21:19.849 } 00:21:19.849 ]' 00:21:19.849 22:22:52 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="6e6ce82b-c73f-40d8-af75-b4e67cff158a") .free_clusters' 00:21:20.108 22:22:52 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1367 -- # fc=5099 00:21:20.108 22:22:52 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="6e6ce82b-c73f-40d8-af75-b4e67cff158a") .cluster_size' 00:21:20.108 5099 00:21:20.108 22:22:52 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1368 -- # cs=1048576 00:21:20.108 22:22:52 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1371 -- # free_mb=5099 00:21:20.108 22:22:52 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1372 -- # echo 5099 00:21:20.108 22:22:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@60 -- # lvol_bdev_size=169 00:21:20.108 22:22:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # seq 1 30 00:21:20.108 22:22:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:20.108 22:22:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_1 169 00:21:20.367 d57b26f6-a1c1-4a9c-bf41-28a27b6e25a6 00:21:20.367 22:22:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:20.367 22:22:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_2 169 00:21:20.625 45b35dd4-5ac9-4c16-9e06-91b7d5cf3ec8 00:21:20.625 22:22:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:20.625 22:22:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_3 169 00:21:20.625 68cef7a7-760d-4ed2-88a0-d44d83f983d7 00:21:20.625 22:22:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:20.625 22:22:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_4 169 00:21:20.884 0635f80d-5e30-4420-a80e-17312f7fcaf6 00:21:20.884 22:22:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:20.884 22:22:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_5 169 00:21:21.142 b13efc0c-b74a-4b0a-bb00-1351522f3132 00:21:21.142 22:22:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:21.142 22:22:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_6 169 00:21:21.400 eb147db9-cff3-4fac-a0cf-059ec2d6b283 00:21:21.400 22:22:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:21.400 22:22:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_7 169 00:21:21.659 9fd7de83-8bfc-4286-8cec-aef5164ad645 00:21:21.659 22:22:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:21.659 22:22:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_8 169 00:21:21.659 40ff9e9b-6e3a-4f92-8198-84da550176e6 00:21:21.659 22:22:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:21.659 22:22:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_9 169 00:21:21.918 e1277939-b712-424f-9398-1de3edc17199 00:21:21.918 22:22:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:21.918 22:22:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_10 169 00:21:22.177 79a3737e-b2b8-4feb-975f-7e6a3a3d1576 00:21:22.177 22:22:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:22.177 22:22:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_11 169 00:21:22.177 0524d117-4ed5-4996-9a79-e2b802a835fa 00:21:22.177 22:22:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:22.177 22:22:54 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_12 169 00:21:22.437 f739f204-d3e1-43e4-a446-01c31da61d54 00:21:22.437 22:22:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:22.437 22:22:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_13 169 00:21:22.696 7e7d3b54-6644-4e7b-a19a-f29d07ef6d55 00:21:22.696 22:22:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:22.696 22:22:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_14 169 00:21:22.696 4f4f30ca-5484-4b7f-9ff2-e277c949f9b9 00:21:22.696 22:22:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:22.696 22:22:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_15 169 00:21:22.955 b71f296f-7e52-4a5d-8414-1080175fd536 00:21:22.955 22:22:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:22.955 22:22:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_16 169 00:21:23.214 6a1b2ed9-c55b-4de9-9d03-eb9ecedfebb7 00:21:23.214 22:22:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:23.214 22:22:55 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_17 169 00:21:23.472 15207816-d0a8-4c18-acc4-26c5bc56f7e4 00:21:23.472 22:22:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:23.472 22:22:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_18 169 00:21:23.472 e2c909e3-4821-42c0-be7e-0302d8e45ae6 00:21:23.472 22:22:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:23.472 22:22:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_19 169 00:21:23.731 39d4e49b-efb8-4450-8bce-ab20a8cb8d6f 00:21:23.731 22:22:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:23.731 22:22:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_20 169 00:21:23.990 63e6e119-4f34-4528-a0dd-ad74d29c92ad 00:21:23.990 22:22:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:23.990 22:22:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_21 169 00:21:23.990 37028589-e95d-4d9e-ae80-e33681c99c5b 00:21:23.990 22:22:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 
$CONNECTION_NUMBER) 00:21:23.990 22:22:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_22 169 00:21:24.298 609f1471-9455-4f8b-ac94-efaf87a46881 00:21:24.298 22:22:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:24.298 22:22:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_23 169 00:21:24.558 bbf3773d-4f6a-4fb3-919b-e58d94e8c9a0 00:21:24.558 22:22:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:24.558 22:22:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_24 169 00:21:24.558 4a2dfb51-0cce-42ea-932c-fd5148a1979d 00:21:24.558 22:22:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:24.558 22:22:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_25 169 00:21:24.818 56837505-73fa-4317-a1a1-61f7413bf878 00:21:24.818 22:22:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:24.818 22:22:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_26 169 00:21:25.078 1e81274b-ca9b-4a61-9962-8bce4736552b 00:21:25.078 22:22:57 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:25.078 22:22:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_27 169 00:21:25.078 cf227d73-5d7e-4be6-bb06-eb7362e2cf50 00:21:25.078 22:22:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:25.078 22:22:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_28 169 00:21:25.338 80b4342c-2882-4637-945b-316a1dcd0374 00:21:25.338 22:22:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:25.338 22:22:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_29 169 00:21:25.598 5caf8b32-c297-41ce-b94a-06324439d900 00:21:25.598 22:22:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:25.598 22:22:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e6ce82b-c73f-40d8-af75-b4e67cff158a lbd_30 169 00:21:25.598 ec158fc7-79eb-475f-8db3-e324cf39f556 00:21:25.858 22:22:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # seq 1 30 00:21:25.858 22:22:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:25.858 22:22:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_1:0 00:21:25.858 22:22:57 iscsi_tgt.iscsi_tgt_multiconnection 
-- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target1 Target1_alias lvs0/lbd_1:0 1:2 256 -d 00:21:25.858 22:22:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:25.858 22:22:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_2:0 00:21:25.858 22:22:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target2 Target2_alias lvs0/lbd_2:0 1:2 256 -d 00:21:26.117 22:22:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:26.117 22:22:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_3:0 00:21:26.117 22:22:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias lvs0/lbd_3:0 1:2 256 -d 00:21:26.378 22:22:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:26.378 22:22:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_4:0 00:21:26.378 22:22:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target4 Target4_alias lvs0/lbd_4:0 1:2 256 -d 00:21:26.638 22:22:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:26.638 22:22:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_5:0 00:21:26.638 22:22:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target5 Target5_alias lvs0/lbd_5:0 1:2 256 -d 00:21:26.638 22:22:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:26.638 22:22:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_6:0 00:21:26.638 22:22:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target6 Target6_alias lvs0/lbd_6:0 1:2 256 -d 00:21:26.898 22:22:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:26.898 22:22:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_7:0 00:21:26.898 22:22:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target7 Target7_alias lvs0/lbd_7:0 1:2 256 -d 00:21:27.158 22:22:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:27.158 22:22:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_8:0 00:21:27.158 22:22:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target8 Target8_alias lvs0/lbd_8:0 1:2 256 -d 00:21:27.158 22:22:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:27.158 22:22:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_9:0 00:21:27.158 22:22:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node 
Target9 Target9_alias lvs0/lbd_9:0 1:2 256 -d 00:21:27.417 22:22:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:27.417 22:22:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_10:0 00:21:27.417 22:22:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target10 Target10_alias lvs0/lbd_10:0 1:2 256 -d 00:21:27.677 22:22:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:27.677 22:22:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_11:0 00:21:27.677 22:22:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target11 Target11_alias lvs0/lbd_11:0 1:2 256 -d 00:21:27.677 22:22:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:27.677 22:22:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_12:0 00:21:27.677 22:22:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target12 Target12_alias lvs0/lbd_12:0 1:2 256 -d 00:21:27.936 22:23:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:27.936 22:23:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_13:0 00:21:27.936 22:23:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target13 Target13_alias lvs0/lbd_13:0 1:2 256 -d 00:21:28.196 
22:23:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:28.196 22:23:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_14:0 00:21:28.196 22:23:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target14 Target14_alias lvs0/lbd_14:0 1:2 256 -d 00:21:28.455 22:23:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:28.455 22:23:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_15:0 00:21:28.455 22:23:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target15 Target15_alias lvs0/lbd_15:0 1:2 256 -d 00:21:28.715 22:23:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:28.715 22:23:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_16:0 00:21:28.715 22:23:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target16 Target16_alias lvs0/lbd_16:0 1:2 256 -d 00:21:28.715 22:23:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:28.715 22:23:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_17:0 00:21:28.715 22:23:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target17 Target17_alias lvs0/lbd_17:0 1:2 256 -d 00:21:28.975 22:23:01 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:28.975 22:23:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_18:0 00:21:28.975 22:23:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target18 Target18_alias lvs0/lbd_18:0 1:2 256 -d 00:21:29.234 22:23:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:29.234 22:23:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_19:0 00:21:29.234 22:23:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target19 Target19_alias lvs0/lbd_19:0 1:2 256 -d 00:21:29.234 22:23:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:29.234 22:23:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_20:0 00:21:29.234 22:23:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target20 Target20_alias lvs0/lbd_20:0 1:2 256 -d 00:21:29.494 22:23:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:29.494 22:23:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_21:0 00:21:29.494 22:23:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target21 Target21_alias lvs0/lbd_21:0 1:2 256 -d 00:21:29.753 22:23:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 
$CONNECTION_NUMBER) 00:21:29.753 22:23:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_22:0 00:21:29.753 22:23:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target22 Target22_alias lvs0/lbd_22:0 1:2 256 -d 00:21:30.013 22:23:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:30.013 22:23:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_23:0 00:21:30.013 22:23:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target23 Target23_alias lvs0/lbd_23:0 1:2 256 -d 00:21:30.013 22:23:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:30.013 22:23:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_24:0 00:21:30.013 22:23:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target24 Target24_alias lvs0/lbd_24:0 1:2 256 -d 00:21:30.272 22:23:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:30.272 22:23:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_25:0 00:21:30.272 22:23:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target25 Target25_alias lvs0/lbd_25:0 1:2 256 -d 00:21:30.531 22:23:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:30.531 22:23:02 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_26:0 00:21:30.531 22:23:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target26 Target26_alias lvs0/lbd_26:0 1:2 256 -d 00:21:30.790 22:23:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:30.790 22:23:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_27:0 00:21:30.790 22:23:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target27 Target27_alias lvs0/lbd_27:0 1:2 256 -d 00:21:31.049 22:23:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:31.049 22:23:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_28:0 00:21:31.049 22:23:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target28 Target28_alias lvs0/lbd_28:0 1:2 256 -d 00:21:31.049 22:23:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:31.049 22:23:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_29:0 00:21:31.049 22:23:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target29 Target29_alias lvs0/lbd_29:0 1:2 256 -d 00:21:31.308 22:23:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:31.308 22:23:03 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_30:0 00:21:31.308 22:23:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target30 Target30_alias lvs0/lbd_30:0 1:2 256 -d 00:21:31.567 22:23:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@69 -- # sleep 1 00:21:32.503 Logging into iSCSI target. 00:21:32.503 22:23:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@71 -- # echo 'Logging into iSCSI target.' 00:21:32.503 22:23:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@72 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:21:32.503 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:21:32.503 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2 00:21:32.503 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:21:32.503 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target4 00:21:32.503 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target5 00:21:32.503 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target6 00:21:32.503 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target7 00:21:32.503 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target8 00:21:32.503 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target9 00:21:32.503 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target10 00:21:32.503 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target11 00:21:32.503 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target12 00:21:32.503 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target13 00:21:32.503 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target14 00:21:32.503 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target15 00:21:32.503 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target16 00:21:32.503 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target17 00:21:32.503 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target18 00:21:32.503 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target19 00:21:32.503 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target20 00:21:32.503 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target21 00:21:32.503 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target22 
00:21:32.503 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target23 00:21:32.503 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target24 00:21:32.503 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target25 00:21:32.503 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target26 00:21:32.503 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target27 00:21:32.503 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target28 00:21:32.504 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target29 00:21:32.504 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target30 00:21:32.504 22:23:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@73 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:21:32.504 [2024-07-23 22:23:04.604031] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:32.504 [2024-07-23 22:23:04.620178] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:32.504 [2024-07-23 22:23:04.638617] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:32.504 [2024-07-23 22:23:04.684197] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:32.504 [2024-07-23 22:23:04.686365] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:32.763 [2024-07-23 22:23:04.728165] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:32.763 [2024-07-23 22:23:04.736287] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:32.763 [2024-07-23 22:23:04.805381] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:32.763 [2024-07-23 22:23:04.808389] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:32.763 [2024-07-23 22:23:04.857599] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:32.763 [2024-07-23 22:23:04.869192] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:32.763 [2024-07-23 22:23:04.893807] 
scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:32.763 [2024-07-23 22:23:04.904307] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:32.763 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:21:32.763 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:21:32.763 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:21:32.764 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:21:32.764 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:21:32.764 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:21:32.764 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:21:32.764 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:21:32.764 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:21:32.764 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:21:32.764 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:21:32.764 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:21:32.764 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:21:32.764 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:21:32.764 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:21:32.764 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] 00:21:32.764 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] 00:21:32.764 Logging 
in to [iface: default, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] 00:21:32.764 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] 00:21:32.764 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] 00:21:32.764 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] 00:21:32.764 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] 00:21:32.764 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] 00:21:32.764 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] 00:21:32.764 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] 00:21:32.764 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] 00:21:32.764 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] 00:21:32.764 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] 00:21:32.764 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] 00:21:32.764 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] 00:21:32.764 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:21:32.764 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:21:32.764 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:21:32.764 Login to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:21:32.764 Login to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 
00:21:32.764 Login to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:21:32.764 Login to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:21:32.764 Login to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:21:32.764 Login to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:21:32.764 Login to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:21:32.764 Login to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:21:32.764 Login to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:21:32.764 Login to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:21:32.764 Login to [iface: default, target: iqn.2016-06.io.spdk:Target14, por[2024-07-23 22:23:04.946444] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:33.023 [2024-07-23 22:23:04.970165] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:33.023 [2024-07-23 22:23:05.017375] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:33.023 [2024-07-23 22:23:05.040006] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:33.023 [2024-07-23 22:23:05.064323] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:33.023 [2024-07-23 22:23:05.091057] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:33.023 [2024-07-23 22:23:05.112028] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:33.023 [2024-07-23 22:23:05.151377] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:33.024 [2024-07-23 22:23:05.169107] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:33.024 [2024-07-23 22:23:05.206448] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:33.283 [2024-07-23 22:23:05.228570] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:33.283 [2024-07-23 22:23:05.268747] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:33.283 [2024-07-23 22:23:05.288638] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:33.283 [2024-07-23 22:23:05.329687] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:33.283 [2024-07-23 22:23:05.353010] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:33.283 tal: 10.0.0.1,3260] successful. 00:21:33.283 Login to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 00:21:33.283 Login to [iface: default, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] successful. 00:21:33.283 Login to [iface: default, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] successful. 00:21:33.283 Login to [iface: default, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] successful. 00:21:33.283 Login to [iface: default, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] successful. 00:21:33.283 Login to [iface: default, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] successful. 00:21:33.283 Login to [iface: default, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] successful. 00:21:33.283 Login to [iface: default, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] successful. 00:21:33.283 Login to [iface: default, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] successful. 00:21:33.283 Login to [iface: default, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] successful. 
00:21:33.283 Login to [iface: default, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] successful. 00:21:33.283 Login to [iface: default, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] successful. 00:21:33.283 Login to [iface: default, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] successful. 00:21:33.283 Login to [iface: default, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] successful. 00:21:33.283 Login to [iface: default, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] successful. 00:21:33.283 Login to [iface: default, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] successful. 00:21:33.283 22:23:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@74 -- # waitforiscsidevices 30 00:21:33.283 22:23:05 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@116 -- # local num=30 00:21:33.283 22:23:05 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:21:33.283 22:23:05 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:21:33.283 22:23:05 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:21:33.284 22:23:05 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:21:33.284 [2024-07-23 22:23:05.396409] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:33.284 [2024-07-23 22:23:05.396464] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:33.284 22:23:05 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # n=30 00:21:33.284 22:23:05 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@120 -- # '[' 30 -ne 30 ']' 00:21:33.284 22:23:05 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@123 -- # return 0 00:21:33.284 Running FIO 00:21:33.284 22:23:05 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@76 -- # echo 'Running FIO' 00:21:33.284 22:23:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 64 -t randrw -r 5 00:21:33.543 [global] 00:21:33.543 thread=1 00:21:33.543 invalidate=1 00:21:33.543 rw=randrw 00:21:33.543 time_based=1 00:21:33.543 runtime=5 00:21:33.543 ioengine=libaio 00:21:33.543 direct=1 00:21:33.543 bs=131072 00:21:33.543 iodepth=64 00:21:33.543 norandommap=1 00:21:33.543 numjobs=1 00:21:33.543 00:21:33.543 [job0] 00:21:33.543 filename=/dev/sda 00:21:33.543 [job1] 00:21:33.543 filename=/dev/sdb 00:21:33.543 [job2] 00:21:33.543 filename=/dev/sdc 00:21:33.543 [job3] 00:21:33.543 filename=/dev/sdd 00:21:33.543 [job4] 00:21:33.543 filename=/dev/sde 00:21:33.543 [job5] 00:21:33.543 filename=/dev/sdg 00:21:33.543 [job6] 00:21:33.543 filename=/dev/sdf 00:21:33.543 [job7] 00:21:33.543 filename=/dev/sdh 00:21:33.543 [job8] 00:21:33.543 filename=/dev/sdi 00:21:33.543 [job9] 00:21:33.543 filename=/dev/sdj 00:21:33.543 [job10] 00:21:33.543 filename=/dev/sdk 00:21:33.543 [job11] 00:21:33.543 filename=/dev/sdl 00:21:33.543 [job12] 00:21:33.543 filename=/dev/sdm 00:21:33.543 [job13] 00:21:33.543 filename=/dev/sdn 00:21:33.543 [job14] 00:21:33.543 filename=/dev/sdo 00:21:33.543 [job15] 00:21:33.543 filename=/dev/sdp 00:21:33.543 [job16] 00:21:33.543 filename=/dev/sdq 00:21:33.543 [job17] 00:21:33.543 filename=/dev/sdr 00:21:33.543 [job18] 00:21:33.543 filename=/dev/sds 00:21:33.543 [job19] 00:21:33.543 filename=/dev/sdt 00:21:33.543 [job20] 00:21:33.543 filename=/dev/sdu 00:21:33.543 [job21] 00:21:33.543 filename=/dev/sdv 00:21:33.543 [job22] 00:21:33.543 filename=/dev/sdw 00:21:33.543 [job23] 00:21:33.543 filename=/dev/sdx 00:21:33.543 [job24] 00:21:33.543 filename=/dev/sdy 00:21:33.543 [job25] 00:21:33.543 filename=/dev/sdz 00:21:33.543 [job26] 00:21:33.543 filename=/dev/sdaa 00:21:33.543 [job27] 00:21:33.543 
filename=/dev/sdab 00:21:33.543 [job28] 00:21:33.543 filename=/dev/sdac 00:21:33.543 [job29] 00:21:33.543 filename=/dev/sdad 00:21:34.113 queue_depth set to 113 (sda) 00:21:34.113 queue_depth set to 113 (sdb) 00:21:34.113 queue_depth set to 113 (sdc) 00:21:34.113 queue_depth set to 113 (sdd) 00:21:34.113 queue_depth set to 113 (sde) 00:21:34.113 queue_depth set to 113 (sdg) 00:21:34.113 queue_depth set to 113 (sdf) 00:21:34.373 queue_depth set to 113 (sdh) 00:21:34.373 queue_depth set to 113 (sdi) 00:21:34.373 queue_depth set to 113 (sdj) 00:21:34.373 queue_depth set to 113 (sdk) 00:21:34.373 queue_depth set to 113 (sdl) 00:21:34.373 queue_depth set to 113 (sdm) 00:21:34.373 queue_depth set to 113 (sdn) 00:21:34.373 queue_depth set to 113 (sdo) 00:21:34.373 queue_depth set to 113 (sdp) 00:21:34.373 queue_depth set to 113 (sdq) 00:21:34.373 queue_depth set to 113 (sdr) 00:21:34.373 queue_depth set to 113 (sds) 00:21:34.632 queue_depth set to 113 (sdt) 00:21:34.632 queue_depth set to 113 (sdu) 00:21:34.632 queue_depth set to 113 (sdv) 00:21:34.632 queue_depth set to 113 (sdw) 00:21:34.632 queue_depth set to 113 (sdx) 00:21:34.632 queue_depth set to 113 (sdy) 00:21:34.632 queue_depth set to 113 (sdz) 00:21:34.632 queue_depth set to 113 (sdaa) 00:21:34.632 queue_depth set to 113 (sdab) 00:21:34.632 queue_depth set to 113 (sdac) 00:21:34.632 queue_depth set to 113 (sdad) 00:21:34.892 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 job2: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 job3: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 job4: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, 
ioengine=libaio, iodepth=64 00:21:34.892 job5: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 job6: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 job7: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 job8: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 job9: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 job10: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 job11: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 job12: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 job13: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 job14: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 job15: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 job16: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 job17: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 job18: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 job19: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 job20: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 
128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 job21: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 job22: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 job23: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 job24: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 job25: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 job26: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 job27: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 job28: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 job29: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:34.892 fio-3.35 00:21:34.892 Starting 30 threads 00:21:34.892 [2024-07-23 22:23:06.972163] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 22:23:06.979756] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 22:23:06.982475] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 22:23:06.985165] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 22:23:06.987964] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 22:23:06.990704] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 
22:23:06.993767] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 22:23:06.996936] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 22:23:06.999933] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 22:23:07.003013] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 22:23:07.006087] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 22:23:07.009411] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 22:23:07.012926] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 22:23:07.015939] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 22:23:07.018508] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 22:23:07.020939] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 22:23:07.023879] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 22:23:07.027776] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 22:23:07.030481] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 22:23:07.033043] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 22:23:07.035590] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 22:23:07.038116] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 22:23:07.042665] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 22:23:07.045304] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 22:23:07.047897] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 22:23:07.050571] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 22:23:07.053299] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 22:23:07.055793] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 22:23:07.058451] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.892 [2024-07-23 22:23:07.060967] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.467 [2024-07-23 22:23:13.042260] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.467 [2024-07-23 22:23:13.057923] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.467 [2024-07-23 22:23:13.065877] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.467 [2024-07-23 22:23:13.069146] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.467 [2024-07-23 22:23:13.072372] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.467 [2024-07-23 22:23:13.075044] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.467 [2024-07-23 22:23:13.078224] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.467 [2024-07-23 22:23:13.085278] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.467 [2024-07-23 22:23:13.088081] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:21:41.467 [2024-07-23 22:23:13.090678] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.467 [2024-07-23 22:23:13.093245] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.467 [2024-07-23 22:23:13.095958] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.467 [2024-07-23 22:23:13.098667] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.467 [2024-07-23 22:23:13.101121] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.467 [2024-07-23 22:23:13.103747] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.467 [2024-07-23 22:23:13.106308] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.467 [2024-07-23 22:23:13.109145] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.467 [2024-07-23 22:23:13.112106] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.467 [2024-07-23 22:23:13.115124] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.467 [2024-07-23 22:23:13.118123] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.467 [2024-07-23 22:23:13.121109] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.467 00:21:41.467 job0: (groupid=0, jobs=1): err= 0: pid=95398: Tue Jul 23 22:23:13 2024 00:21:41.467 read: IOPS=89, BW=11.2MiB/s (11.8MB/s)(60.2MiB/5356msec) 00:21:41.467 slat (usec): min=8, max=1341, avg=34.17, stdev=66.42 00:21:41.467 clat (msec): min=21, max=380, avg=49.58, stdev=35.00 00:21:41.467 lat (msec): min=21, max=380, avg=49.61, stdev=35.00 00:21:41.467 clat percentiles (msec): 00:21:41.467 | 1.00th=[ 31], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 41], 00:21:41.467 | 30.00th=[ 42], 
40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:21:41.467 | 70.00th=[ 44], 80.00th=[ 45], 90.00th=[ 64], 95.00th=[ 91], 00:21:41.467 | 99.00th=[ 161], 99.50th=[ 372], 99.90th=[ 380], 99.95th=[ 380], 00:21:41.467 | 99.99th=[ 380] 00:21:41.467 bw ( KiB/s): min= 8704, max=19928, per=3.34%, avg=12226.10, stdev=3299.11, samples=10 00:21:41.467 iops : min= 68, max= 155, avg=95.20, stdev=25.71, samples=10 00:21:41.467 write: IOPS=96, BW=12.0MiB/s (12.6MB/s)(64.5MiB/5356msec); 0 zone resets 00:21:41.467 slat (usec): min=12, max=1543, avg=45.91, stdev=98.65 00:21:41.467 clat (msec): min=151, max=959, avg=617.11, stdev=92.05 00:21:41.467 lat (msec): min=151, max=959, avg=617.15, stdev=92.05 00:21:41.467 clat percentiles (msec): 00:21:41.467 | 1.00th=[ 253], 5.00th=[ 435], 10.00th=[ 558], 20.00th=[ 609], 00:21:41.467 | 30.00th=[ 617], 40.00th=[ 625], 50.00th=[ 625], 60.00th=[ 634], 00:21:41.467 | 70.00th=[ 642], 80.00th=[ 642], 90.00th=[ 651], 95.00th=[ 709], 00:21:41.467 | 99.00th=[ 919], 99.50th=[ 927], 99.90th=[ 961], 99.95th=[ 961], 00:21:41.467 | 99.99th=[ 961] 00:21:41.467 bw ( KiB/s): min= 6387, max=12800, per=3.18%, avg=11690.40, stdev=1887.14, samples=10 00:21:41.467 iops : min= 49, max= 100, avg=91.00, stdev=14.96, samples=10 00:21:41.467 lat (msec) : 50=42.79%, 100=3.41%, 250=2.20%, 500=3.91%, 750=45.69% 00:21:41.467 lat (msec) : 1000=2.00% 00:21:41.467 cpu : usr=0.34%, sys=0.58%, ctx=596, majf=0, minf=1 00:21:41.467 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:21:41.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.467 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.467 issued rwts: total=482,516,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.467 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.467 job1: (groupid=0, jobs=1): err= 0: pid=95401: Tue Jul 23 22:23:13 2024 00:21:41.467 read: IOPS=85, BW=10.7MiB/s (11.2MB/s)(57.4MiB/5351msec) 00:21:41.467 
slat (usec): min=8, max=268, avg=33.10, stdev=21.39 00:21:41.467 clat (msec): min=30, max=365, avg=50.57, stdev=30.29 00:21:41.467 lat (msec): min=30, max=365, avg=50.60, stdev=30.29 00:21:41.467 clat percentiles (msec): 00:21:41.467 | 1.00th=[ 39], 5.00th=[ 41], 10.00th=[ 41], 20.00th=[ 42], 00:21:41.467 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:21:41.467 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 86], 95.00th=[ 121], 00:21:41.467 | 99.00th=[ 153], 99.50th=[ 159], 99.90th=[ 368], 99.95th=[ 368], 00:21:41.467 | 99.99th=[ 368] 00:21:41.467 bw ( KiB/s): min= 7936, max=18725, per=3.20%, avg=11702.90, stdev=3142.32, samples=10 00:21:41.467 iops : min= 62, max= 146, avg=91.40, stdev=24.48, samples=10 00:21:41.467 write: IOPS=96, BW=12.1MiB/s (12.7MB/s)(64.9MiB/5351msec); 0 zone resets 00:21:41.467 slat (usec): min=13, max=126, avg=39.16, stdev=19.46 00:21:41.467 clat (msec): min=155, max=922, avg=614.30, stdev=93.62 00:21:41.467 lat (msec): min=156, max=922, avg=614.33, stdev=93.62 00:21:41.467 clat percentiles (msec): 00:21:41.467 | 1.00th=[ 251], 5.00th=[ 401], 10.00th=[ 531], 20.00th=[ 600], 00:21:41.467 | 30.00th=[ 617], 40.00th=[ 625], 50.00th=[ 625], 60.00th=[ 634], 00:21:41.467 | 70.00th=[ 642], 80.00th=[ 651], 90.00th=[ 659], 95.00th=[ 709], 00:21:41.467 | 99.00th=[ 885], 99.50th=[ 902], 99.90th=[ 919], 99.95th=[ 919], 00:21:41.467 | 99.99th=[ 919] 00:21:41.467 bw ( KiB/s): min= 6669, max=12544, per=3.18%, avg=11700.50, stdev=1782.02, samples=10 00:21:41.467 iops : min= 52, max= 98, avg=91.40, stdev=13.95, samples=10 00:21:41.467 lat (msec) : 50=41.10%, 100=3.37%, 250=2.76%, 500=4.19%, 750=46.22% 00:21:41.467 lat (msec) : 1000=2.35% 00:21:41.467 cpu : usr=0.24%, sys=0.56%, ctx=564, majf=0, minf=1 00:21:41.467 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.6% 00:21:41.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.467 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, 
>=64=0.0% 00:21:41.467 issued rwts: total=459,519,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.467 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.467 job2: (groupid=0, jobs=1): err= 0: pid=95423: Tue Jul 23 22:23:13 2024 00:21:41.467 read: IOPS=99, BW=12.4MiB/s (13.0MB/s)(66.4MiB/5347msec) 00:21:41.467 slat (usec): min=8, max=338, avg=32.17, stdev=21.12 00:21:41.467 clat (msec): min=30, max=375, avg=49.52, stdev=32.48 00:21:41.467 lat (msec): min=30, max=375, avg=49.55, stdev=32.47 00:21:41.467 clat percentiles (msec): 00:21:41.467 | 1.00th=[ 32], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 41], 00:21:41.467 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:21:41.467 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 47], 95.00th=[ 112], 00:21:41.467 | 99.00th=[ 176], 99.50th=[ 355], 99.90th=[ 376], 99.95th=[ 376], 00:21:41.467 | 99.99th=[ 376] 00:21:41.467 bw ( KiB/s): min=10240, max=17408, per=3.69%, avg=13491.20, stdev=2397.18, samples=10 00:21:41.467 iops : min= 80, max= 136, avg=105.10, stdev=18.72, samples=10 00:21:41.467 write: IOPS=96, BW=12.1MiB/s (12.6MB/s)(64.5MiB/5347msec); 0 zone resets 00:21:41.467 slat (usec): min=11, max=1719, avg=41.61, stdev=77.44 00:21:41.467 clat (msec): min=159, max=953, avg=611.24, stdev=93.15 00:21:41.467 lat (msec): min=159, max=953, avg=611.28, stdev=93.14 00:21:41.468 clat percentiles (msec): 00:21:41.468 | 1.00th=[ 251], 5.00th=[ 405], 10.00th=[ 558], 20.00th=[ 592], 00:21:41.468 | 30.00th=[ 609], 40.00th=[ 617], 50.00th=[ 625], 60.00th=[ 634], 00:21:41.468 | 70.00th=[ 634], 80.00th=[ 642], 90.00th=[ 651], 95.00th=[ 726], 00:21:41.468 | 99.00th=[ 911], 99.50th=[ 944], 99.90th=[ 953], 99.95th=[ 953], 00:21:41.468 | 99.99th=[ 953] 00:21:41.468 bw ( KiB/s): min= 6656, max=12518, per=3.17%, avg=11651.90, stdev=1774.47, samples=10 00:21:41.468 iops : min= 52, max= 97, avg=90.70, stdev=13.77, samples=10 00:21:41.468 lat (msec) : 50=45.75%, 100=2.29%, 250=2.87%, 500=3.82%, 750=43.17% 00:21:41.468 lat (msec) : 
1000=2.10% 00:21:41.468 cpu : usr=0.37%, sys=0.64%, ctx=549, majf=0, minf=1 00:21:41.468 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=94.0% 00:21:41.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.468 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.468 issued rwts: total=531,516,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.468 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.468 job3: (groupid=0, jobs=1): err= 0: pid=95440: Tue Jul 23 22:23:13 2024 00:21:41.468 read: IOPS=89, BW=11.1MiB/s (11.7MB/s)(59.8MiB/5368msec) 00:21:41.468 slat (usec): min=6, max=166, avg=30.12, stdev=17.41 00:21:41.468 clat (msec): min=22, max=379, avg=49.32, stdev=31.92 00:21:41.468 lat (msec): min=22, max=379, avg=49.35, stdev=31.92 00:21:41.468 clat percentiles (msec): 00:21:41.468 | 1.00th=[ 28], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 42], 00:21:41.468 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:21:41.468 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 64], 95.00th=[ 92], 00:21:41.468 | 99.00th=[ 167], 99.50th=[ 372], 99.90th=[ 380], 99.95th=[ 380], 00:21:41.468 | 99.99th=[ 380] 00:21:41.468 bw ( KiB/s): min= 8431, max=18944, per=3.32%, avg=12132.70, stdev=3188.68, samples=10 00:21:41.468 iops : min= 65, max= 148, avg=94.70, stdev=25.02, samples=10 00:21:41.468 write: IOPS=95, BW=12.0MiB/s (12.6MB/s)(64.4MiB/5368msec); 0 zone resets 00:21:41.468 slat (usec): min=12, max=1226, avg=38.38, stdev=54.91 00:21:41.468 clat (msec): min=157, max=989, avg=620.23, stdev=93.43 00:21:41.468 lat (msec): min=158, max=989, avg=620.27, stdev=93.42 00:21:41.468 clat percentiles (msec): 00:21:41.468 | 1.00th=[ 268], 5.00th=[ 439], 10.00th=[ 575], 20.00th=[ 609], 00:21:41.468 | 30.00th=[ 617], 40.00th=[ 625], 50.00th=[ 625], 60.00th=[ 634], 00:21:41.468 | 70.00th=[ 634], 80.00th=[ 642], 90.00th=[ 659], 95.00th=[ 735], 00:21:41.468 | 99.00th=[ 944], 99.50th=[ 969], 99.90th=[ 986], 
99.95th=[ 986], 00:21:41.468 | 99.99th=[ 986] 00:21:41.468 bw ( KiB/s): min= 6144, max=12544, per=3.17%, avg=11645.50, stdev=1945.94, samples=10 00:21:41.468 iops : min= 48, max= 98, avg=90.90, stdev=15.18, samples=10 00:21:41.468 lat (msec) : 50=42.60%, 100=3.42%, 250=2.22%, 500=3.63%, 750=45.72% 00:21:41.468 lat (msec) : 1000=2.42% 00:21:41.468 cpu : usr=0.22%, sys=0.58%, ctx=547, majf=0, minf=1 00:21:41.468 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:21:41.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.468 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.468 issued rwts: total=478,515,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.468 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.468 job4: (groupid=0, jobs=1): err= 0: pid=95448: Tue Jul 23 22:23:13 2024 00:21:41.468 read: IOPS=94, BW=11.8MiB/s (12.4MB/s)(63.1MiB/5348msec) 00:21:41.468 slat (usec): min=8, max=893, avg=38.84, stdev=59.03 00:21:41.468 clat (msec): min=36, max=386, avg=51.02, stdev=29.19 00:21:41.468 lat (msec): min=37, max=386, avg=51.06, stdev=29.18 00:21:41.468 clat percentiles (msec): 00:21:41.468 | 1.00th=[ 40], 5.00th=[ 41], 10.00th=[ 41], 20.00th=[ 42], 00:21:41.468 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:21:41.468 | 70.00th=[ 44], 80.00th=[ 45], 90.00th=[ 87], 95.00th=[ 105], 00:21:41.468 | 99.00th=[ 131], 99.50th=[ 163], 99.90th=[ 388], 99.95th=[ 388], 00:21:41.468 | 99.99th=[ 388] 00:21:41.468 bw ( KiB/s): min= 8669, max=25138, per=3.51%, avg=12862.10, stdev=4585.57, samples=10 00:21:41.468 iops : min= 67, max= 196, avg=100.10, stdev=35.89, samples=10 00:21:41.468 write: IOPS=96, BW=12.1MiB/s (12.7MB/s)(64.6MiB/5348msec); 0 zone resets 00:21:41.468 slat (usec): min=11, max=987, avg=51.01, stdev=87.60 00:21:41.468 clat (msec): min=150, max=944, avg=611.27, stdev=95.07 00:21:41.468 lat (msec): min=150, max=945, avg=611.32, stdev=95.08 00:21:41.468 
clat percentiles (msec): 00:21:41.468 | 1.00th=[ 251], 5.00th=[ 405], 10.00th=[ 527], 20.00th=[ 592], 00:21:41.468 | 30.00th=[ 609], 40.00th=[ 617], 50.00th=[ 625], 60.00th=[ 634], 00:21:41.468 | 70.00th=[ 642], 80.00th=[ 642], 90.00th=[ 659], 95.00th=[ 709], 00:21:41.468 | 99.00th=[ 894], 99.50th=[ 927], 99.90th=[ 944], 99.95th=[ 944], 00:21:41.468 | 99.99th=[ 944] 00:21:41.468 bw ( KiB/s): min= 6669, max=12518, per=3.17%, avg=11653.20, stdev=1770.40, samples=10 00:21:41.468 iops : min= 52, max= 97, avg=90.70, stdev=13.77, samples=10 00:21:41.468 lat (msec) : 50=41.98%, 100=4.40%, 250=3.42%, 500=3.72%, 750=44.13% 00:21:41.468 lat (msec) : 1000=2.35% 00:21:41.468 cpu : usr=0.34%, sys=0.64%, ctx=621, majf=0, minf=1 00:21:41.468 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.8% 00:21:41.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.468 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.468 issued rwts: total=505,517,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.468 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.468 job5: (groupid=0, jobs=1): err= 0: pid=95449: Tue Jul 23 22:23:13 2024 00:21:41.468 read: IOPS=98, BW=12.3MiB/s (12.9MB/s)(65.8MiB/5350msec) 00:21:41.468 slat (usec): min=8, max=633, avg=34.43, stdev=42.34 00:21:41.468 clat (msec): min=31, max=369, avg=48.86, stdev=28.32 00:21:41.468 lat (msec): min=31, max=369, avg=48.90, stdev=28.31 00:21:41.468 clat percentiles (msec): 00:21:41.468 | 1.00th=[ 34], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 41], 00:21:41.468 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:21:41.468 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 53], 95.00th=[ 91], 00:21:41.468 | 99.00th=[ 144], 99.50th=[ 174], 99.90th=[ 372], 99.95th=[ 372], 00:21:41.468 | 99.99th=[ 372] 00:21:41.468 bw ( KiB/s): min= 9728, max=19200, per=3.67%, avg=13414.40, stdev=3392.36, samples=10 00:21:41.468 iops : min= 76, max= 150, avg=104.80, 
stdev=26.50, samples=10 00:21:41.468 write: IOPS=96, BW=12.1MiB/s (12.7MB/s)(64.6MiB/5350msec); 0 zone resets 00:21:41.468 slat (usec): min=10, max=739, avg=43.90, stdev=50.40 00:21:41.468 clat (msec): min=161, max=958, avg=611.70, stdev=90.97 00:21:41.468 lat (msec): min=161, max=958, avg=611.74, stdev=90.98 00:21:41.468 clat percentiles (msec): 00:21:41.468 | 1.00th=[ 253], 5.00th=[ 422], 10.00th=[ 558], 20.00th=[ 600], 00:21:41.468 | 30.00th=[ 609], 40.00th=[ 617], 50.00th=[ 625], 60.00th=[ 625], 00:21:41.468 | 70.00th=[ 634], 80.00th=[ 642], 90.00th=[ 651], 95.00th=[ 709], 00:21:41.468 | 99.00th=[ 902], 99.50th=[ 927], 99.90th=[ 961], 99.95th=[ 961], 00:21:41.469 | 99.99th=[ 961] 00:21:41.469 bw ( KiB/s): min= 6656, max=12544, per=3.17%, avg=11673.60, stdev=1774.44, samples=10 00:21:41.469 iops : min= 52, max= 98, avg=91.20, stdev=13.86, samples=10 00:21:41.469 lat (msec) : 50=45.35%, 100=2.78%, 250=2.59%, 500=3.64%, 750=43.53% 00:21:41.469 lat (msec) : 1000=2.11% 00:21:41.469 cpu : usr=0.26%, sys=0.71%, ctx=582, majf=0, minf=1 00:21:41.469 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=94.0% 00:21:41.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.469 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.469 issued rwts: total=526,517,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.469 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.469 job6: (groupid=0, jobs=1): err= 0: pid=95450: Tue Jul 23 22:23:13 2024 00:21:41.469 read: IOPS=94, BW=11.8MiB/s (12.4MB/s)(63.4MiB/5357msec) 00:21:41.469 slat (usec): min=8, max=742, avg=36.17, stdev=50.17 00:21:41.469 clat (msec): min=21, max=384, avg=49.52, stdev=34.39 00:21:41.469 lat (msec): min=21, max=384, avg=49.55, stdev=34.39 00:21:41.469 clat percentiles (msec): 00:21:41.469 | 1.00th=[ 37], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 41], 00:21:41.469 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:21:41.469 
| 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 58], 95.00th=[ 90], 00:21:41.469 | 99.00th=[ 167], 99.50th=[ 376], 99.90th=[ 384], 99.95th=[ 384], 00:21:41.469 | 99.99th=[ 384] 00:21:41.469 bw ( KiB/s): min= 7424, max=18139, per=3.52%, avg=12872.30, stdev=3067.99, samples=10 00:21:41.469 iops : min= 58, max= 141, avg=100.40, stdev=23.76, samples=10 00:21:41.469 write: IOPS=95, BW=12.0MiB/s (12.6MB/s)(64.2MiB/5357msec); 0 zone resets 00:21:41.469 slat (usec): min=11, max=415, avg=38.58, stdev=30.10 00:21:41.469 clat (msec): min=152, max=974, avg=617.31, stdev=90.95 00:21:41.469 lat (msec): min=152, max=974, avg=617.35, stdev=90.95 00:21:41.469 clat percentiles (msec): 00:21:41.469 | 1.00th=[ 264], 5.00th=[ 447], 10.00th=[ 567], 20.00th=[ 600], 00:21:41.469 | 30.00th=[ 617], 40.00th=[ 617], 50.00th=[ 625], 60.00th=[ 634], 00:21:41.469 | 70.00th=[ 642], 80.00th=[ 651], 90.00th=[ 659], 95.00th=[ 709], 00:21:41.469 | 99.00th=[ 919], 99.50th=[ 953], 99.90th=[ 978], 99.95th=[ 978], 00:21:41.469 | 99.99th=[ 978] 00:21:41.469 bw ( KiB/s): min= 6387, max=12544, per=3.17%, avg=11646.60, stdev=1862.89, samples=10 00:21:41.469 iops : min= 49, max= 98, avg=90.80, stdev=14.80, samples=10 00:21:41.469 lat (msec) : 50=44.37%, 100=3.23%, 250=2.06%, 500=3.72%, 750=44.47% 00:21:41.469 lat (msec) : 1000=2.15% 00:21:41.469 cpu : usr=0.35%, sys=0.60%, ctx=601, majf=0, minf=1 00:21:41.469 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.8% 00:21:41.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.469 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.469 issued rwts: total=507,514,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.469 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.469 job7: (groupid=0, jobs=1): err= 0: pid=95465: Tue Jul 23 22:23:13 2024 00:21:41.469 read: IOPS=88, BW=11.0MiB/s (11.6MB/s)(59.0MiB/5348msec) 00:21:41.469 slat (usec): min=8, max=584, avg=33.34, stdev=41.63 
00:21:41.469 clat (msec): min=31, max=372, avg=51.03, stdev=32.52 00:21:41.469 lat (msec): min=31, max=372, avg=51.06, stdev=32.52 00:21:41.469 clat percentiles (msec): 00:21:41.469 | 1.00th=[ 37], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 42], 00:21:41.469 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:21:41.469 | 70.00th=[ 44], 80.00th=[ 45], 90.00th=[ 77], 95.00th=[ 110], 00:21:41.469 | 99.00th=[ 146], 99.50th=[ 351], 99.90th=[ 372], 99.95th=[ 372], 00:21:41.469 | 99.99th=[ 372] 00:21:41.469 bw ( KiB/s): min= 7936, max=18176, per=3.27%, avg=11981.30, stdev=3137.81, samples=10 00:21:41.469 iops : min= 62, max= 142, avg=93.30, stdev=24.49, samples=10 00:21:41.469 write: IOPS=96, BW=12.1MiB/s (12.7MB/s)(64.6MiB/5348msec); 0 zone resets 00:21:41.469 slat (usec): min=14, max=551, avg=38.36, stdev=30.38 00:21:41.469 clat (msec): min=154, max=982, avg=614.52, stdev=94.52 00:21:41.469 lat (msec): min=155, max=982, avg=614.56, stdev=94.52 00:21:41.469 clat percentiles (msec): 00:21:41.469 | 1.00th=[ 255], 5.00th=[ 409], 10.00th=[ 535], 20.00th=[ 600], 00:21:41.469 | 30.00th=[ 617], 40.00th=[ 625], 50.00th=[ 634], 60.00th=[ 634], 00:21:41.469 | 70.00th=[ 642], 80.00th=[ 651], 90.00th=[ 659], 95.00th=[ 718], 00:21:41.469 | 99.00th=[ 911], 99.50th=[ 944], 99.90th=[ 986], 99.95th=[ 986], 00:21:41.469 | 99.99th=[ 986] 00:21:41.469 bw ( KiB/s): min= 6656, max=12494, per=3.18%, avg=11677.60, stdev=1778.35, samples=10 00:21:41.469 iops : min= 52, max= 97, avg=90.90, stdev=13.80, samples=10 00:21:41.469 lat (msec) : 50=41.56%, 100=2.93%, 250=3.44%, 500=4.15%, 750=45.60% 00:21:41.469 lat (msec) : 1000=2.33% 00:21:41.469 cpu : usr=0.37%, sys=0.52%, ctx=572, majf=0, minf=1 00:21:41.469 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.6% 00:21:41.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.469 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.469 issued rwts: total=472,517,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:21:41.469 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.469 job8: (groupid=0, jobs=1): err= 0: pid=95499: Tue Jul 23 22:23:13 2024 00:21:41.469 read: IOPS=86, BW=10.8MiB/s (11.3MB/s)(58.0MiB/5367msec) 00:21:41.469 slat (nsec): min=6357, max=82545, avg=25740.34, stdev=13727.70 00:21:41.469 clat (msec): min=2, max=399, avg=50.61, stdev=46.17 00:21:41.469 lat (msec): min=2, max=399, avg=50.64, stdev=46.17 00:21:41.469 clat percentiles (msec): 00:21:41.469 | 1.00th=[ 7], 5.00th=[ 30], 10.00th=[ 40], 20.00th=[ 41], 00:21:41.469 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 43], 00:21:41.469 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 46], 95.00th=[ 138], 00:21:41.469 | 99.00th=[ 372], 99.50th=[ 380], 99.90th=[ 401], 99.95th=[ 401], 00:21:41.469 | 99.99th=[ 401] 00:21:41.469 bw ( KiB/s): min= 7424, max=18688, per=3.20%, avg=11720.10, stdev=2936.09, samples=10 00:21:41.469 iops : min= 58, max= 146, avg=91.40, stdev=22.96, samples=10 00:21:41.469 write: IOPS=95, BW=12.0MiB/s (12.6MB/s)(64.2MiB/5367msec); 0 zone resets 00:21:41.469 slat (usec): min=8, max=139, avg=32.87, stdev=15.21 00:21:41.469 clat (msec): min=44, max=959, avg=621.72, stdev=94.77 00:21:41.469 lat (msec): min=44, max=959, avg=621.75, stdev=94.77 00:21:41.469 clat percentiles (msec): 00:21:41.469 | 1.00th=[ 211], 5.00th=[ 443], 10.00th=[ 584], 20.00th=[ 609], 00:21:41.469 | 30.00th=[ 617], 40.00th=[ 625], 50.00th=[ 634], 60.00th=[ 634], 00:21:41.469 | 70.00th=[ 642], 80.00th=[ 651], 90.00th=[ 676], 95.00th=[ 718], 00:21:41.469 | 99.00th=[ 911], 99.50th=[ 936], 99.90th=[ 961], 99.95th=[ 961], 00:21:41.469 | 99.99th=[ 961] 00:21:41.469 bw ( KiB/s): min= 6656, max=12800, per=3.18%, avg=11694.10, stdev=1800.10, samples=10 00:21:41.469 iops : min= 52, max= 100, avg=91.20, stdev=14.00, samples=10 00:21:41.469 lat (msec) : 4=0.31%, 10=0.82%, 20=1.02%, 50=41.41%, 100=1.43% 00:21:41.469 lat (msec) : 250=2.56%, 500=3.68%, 750=46.52%, 1000=2.25% 
00:21:41.469 cpu : usr=0.17%, sys=0.50%, ctx=575, majf=0, minf=1 00:21:41.469 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.6% 00:21:41.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.469 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.469 issued rwts: total=464,514,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.469 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.469 job9: (groupid=0, jobs=1): err= 0: pid=95573: Tue Jul 23 22:23:13 2024 00:21:41.469 read: IOPS=100, BW=12.5MiB/s (13.1MB/s)(67.1MiB/5360msec) 00:21:41.469 slat (usec): min=6, max=123, avg=22.76, stdev=10.79 00:21:41.469 clat (msec): min=21, max=389, avg=52.35, stdev=36.28 00:21:41.469 lat (msec): min=21, max=389, avg=52.38, stdev=36.28 00:21:41.469 clat percentiles (msec): 00:21:41.469 | 1.00th=[ 27], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 42], 00:21:41.469 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:21:41.469 | 70.00th=[ 44], 80.00th=[ 45], 90.00th=[ 86], 95.00th=[ 107], 00:21:41.469 | 99.00th=[ 157], 99.50th=[ 380], 99.90th=[ 388], 99.95th=[ 388], 00:21:41.469 | 99.99th=[ 388] 00:21:41.469 bw ( KiB/s): min=10496, max=30268, per=3.73%, avg=13647.70, stdev=6067.14, samples=10 00:21:41.469 iops : min= 82, max= 236, avg=106.50, stdev=47.24, samples=10 00:21:41.469 write: IOPS=95, BW=12.0MiB/s (12.5MB/s)(64.1MiB/5360msec); 0 zone resets 00:21:41.469 slat (usec): min=12, max=955, avg=31.30, stdev=42.37 00:21:41.469 clat (msec): min=151, max=990, avg=612.90, stdev=94.55 00:21:41.469 lat (msec): min=151, max=990, avg=612.93, stdev=94.54 00:21:41.469 clat percentiles (msec): 00:21:41.470 | 1.00th=[ 268], 5.00th=[ 439], 10.00th=[ 527], 20.00th=[ 592], 00:21:41.470 | 30.00th=[ 617], 40.00th=[ 617], 50.00th=[ 625], 60.00th=[ 634], 00:21:41.470 | 70.00th=[ 634], 80.00th=[ 642], 90.00th=[ 659], 95.00th=[ 709], 00:21:41.470 | 99.00th=[ 911], 99.50th=[ 936], 99.90th=[ 995], 99.95th=[ 
995], 00:21:41.470 | 99.99th=[ 995] 00:21:41.470 bw ( KiB/s): min= 6156, max=12544, per=3.16%, avg=11621.10, stdev=1930.68, samples=10 00:21:41.470 iops : min= 48, max= 98, avg=90.70, stdev=15.09, samples=10 00:21:41.470 lat (msec) : 50=42.76%, 100=4.95%, 250=3.43%, 500=3.71%, 750=42.95% 00:21:41.470 lat (msec) : 1000=2.19% 00:21:41.470 cpu : usr=0.34%, sys=0.34%, ctx=593, majf=0, minf=1 00:21:41.470 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.0% 00:21:41.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.470 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.470 issued rwts: total=537,513,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.470 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.470 job10: (groupid=0, jobs=1): err= 0: pid=95593: Tue Jul 23 22:23:13 2024 00:21:41.470 read: IOPS=100, BW=12.6MiB/s (13.2MB/s)(67.8MiB/5387msec) 00:21:41.470 slat (usec): min=8, max=1721, avg=34.80, stdev=78.23 00:21:41.470 clat (msec): min=4, max=403, avg=49.65, stdev=31.25 00:21:41.470 lat (msec): min=4, max=403, avg=49.69, stdev=31.25 00:21:41.470 clat percentiles (msec): 00:21:41.470 | 1.00th=[ 37], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 42], 00:21:41.470 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:21:41.470 | 70.00th=[ 44], 80.00th=[ 45], 90.00th=[ 60], 95.00th=[ 94], 00:21:41.470 | 99.00th=[ 176], 99.50th=[ 197], 99.90th=[ 405], 99.95th=[ 405], 00:21:41.470 | 99.99th=[ 405] 00:21:41.470 bw ( KiB/s): min= 8942, max=20950, per=3.77%, avg=13812.50, stdev=3853.00, samples=10 00:21:41.470 iops : min= 69, max= 163, avg=107.60, stdev=30.11, samples=10 00:21:41.470 write: IOPS=95, BW=12.0MiB/s (12.6MB/s)(64.6MiB/5387msec); 0 zone resets 00:21:41.470 slat (usec): min=10, max=1099, avg=42.06, stdev=61.45 00:21:41.470 clat (msec): min=9, max=963, avg=613.86, stdev=99.07 00:21:41.470 lat (msec): min=9, max=963, avg=613.90, stdev=99.07 00:21:41.470 clat 
percentiles (msec): 00:21:41.470 | 1.00th=[ 220], 5.00th=[ 435], 10.00th=[ 567], 20.00th=[ 600], 00:21:41.470 | 30.00th=[ 609], 40.00th=[ 617], 50.00th=[ 625], 60.00th=[ 625], 00:21:41.470 | 70.00th=[ 634], 80.00th=[ 651], 90.00th=[ 659], 95.00th=[ 743], 00:21:41.470 | 99.00th=[ 911], 99.50th=[ 944], 99.90th=[ 961], 99.95th=[ 961], 00:21:41.470 | 99.99th=[ 961] 00:21:41.470 bw ( KiB/s): min= 6387, max=12544, per=3.17%, avg=11664.70, stdev=1867.38, samples=10 00:21:41.470 iops : min= 49, max= 98, avg=90.80, stdev=14.78, samples=10 00:21:41.470 lat (msec) : 10=0.28%, 20=0.09%, 50=45.42%, 100=3.31%, 250=2.55% 00:21:41.470 lat (msec) : 500=3.21%, 750=42.78%, 1000=2.36% 00:21:41.470 cpu : usr=0.22%, sys=0.72%, ctx=628, majf=0, minf=1 00:21:41.470 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.1% 00:21:41.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.470 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.470 issued rwts: total=542,517,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.470 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.470 job11: (groupid=0, jobs=1): err= 0: pid=95600: Tue Jul 23 22:23:13 2024 00:21:41.470 read: IOPS=93, BW=11.7MiB/s (12.3MB/s)(62.8MiB/5349msec) 00:21:41.470 slat (usec): min=8, max=603, avg=31.23, stdev=38.72 00:21:41.470 clat (msec): min=31, max=370, avg=51.93, stdev=31.82 00:21:41.470 lat (msec): min=31, max=370, avg=51.96, stdev=31.82 00:21:41.470 clat percentiles (msec): 00:21:41.470 | 1.00th=[ 38], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 42], 00:21:41.470 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:21:41.470 | 70.00th=[ 44], 80.00th=[ 45], 90.00th=[ 88], 95.00th=[ 126], 00:21:41.470 | 99.00th=[ 150], 99.50th=[ 155], 99.90th=[ 372], 99.95th=[ 372], 00:21:41.470 | 99.99th=[ 372] 00:21:41.470 bw ( KiB/s): min= 8448, max=21760, per=3.49%, avg=12780.00, stdev=3760.99, samples=10 00:21:41.470 iops : min= 66, max= 
170, avg=99.60, stdev=29.37, samples=10 00:21:41.470 write: IOPS=96, BW=12.1MiB/s (12.7MB/s)(64.8MiB/5349msec); 0 zone resets 00:21:41.470 slat (usec): min=13, max=501, avg=36.81, stdev=33.16 00:21:41.470 clat (msec): min=155, max=949, avg=609.75, stdev=98.93 00:21:41.470 lat (msec): min=155, max=949, avg=609.79, stdev=98.93 00:21:41.470 clat percentiles (msec): 00:21:41.470 | 1.00th=[ 251], 5.00th=[ 409], 10.00th=[ 481], 20.00th=[ 592], 00:21:41.470 | 30.00th=[ 609], 40.00th=[ 617], 50.00th=[ 625], 60.00th=[ 634], 00:21:41.470 | 70.00th=[ 642], 80.00th=[ 651], 90.00th=[ 659], 95.00th=[ 701], 00:21:41.470 | 99.00th=[ 894], 99.50th=[ 936], 99.90th=[ 953], 99.95th=[ 953], 00:21:41.470 | 99.99th=[ 953] 00:21:41.470 bw ( KiB/s): min= 6656, max=12494, per=3.18%, avg=11677.30, stdev=1772.37, samples=10 00:21:41.470 iops : min= 52, max= 97, avg=90.90, stdev=13.76, samples=10 00:21:41.470 lat (msec) : 50=42.25%, 100=2.94%, 250=4.31%, 500=5.10%, 750=43.24% 00:21:41.470 lat (msec) : 1000=2.16% 00:21:41.470 cpu : usr=0.30%, sys=0.60%, ctx=578, majf=0, minf=1 00:21:41.470 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.8% 00:21:41.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.470 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.470 issued rwts: total=502,518,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.470 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.470 job12: (groupid=0, jobs=1): err= 0: pid=95601: Tue Jul 23 22:23:13 2024 00:21:41.470 read: IOPS=102, BW=12.8MiB/s (13.4MB/s)(68.6MiB/5364msec) 00:21:41.470 slat (usec): min=8, max=318, avg=26.37, stdev=26.17 00:21:41.470 clat (msec): min=26, max=196, avg=49.32, stdev=24.20 00:21:41.470 lat (msec): min=26, max=196, avg=49.35, stdev=24.19 00:21:41.470 clat percentiles (msec): 00:21:41.470 | 1.00th=[ 32], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 41], 00:21:41.470 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 
43], 00:21:41.470 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 67], 95.00th=[ 112], 00:21:41.470 | 99.00th=[ 153], 99.50th=[ 182], 99.90th=[ 197], 99.95th=[ 197], 00:21:41.470 | 99.99th=[ 197] 00:21:41.470 bw ( KiB/s): min= 8448, max=23552, per=3.84%, avg=14043.90, stdev=4355.89, samples=10 00:21:41.470 iops : min= 66, max= 184, avg=109.40, stdev=34.14, samples=10 00:21:41.470 write: IOPS=96, BW=12.1MiB/s (12.7MB/s)(64.8MiB/5364msec); 0 zone resets 00:21:41.470 slat (usec): min=10, max=490, avg=33.27, stdev=34.07 00:21:41.470 clat (msec): min=166, max=962, avg=609.56, stdev=93.90 00:21:41.470 lat (msec): min=166, max=963, avg=609.60, stdev=93.90 00:21:41.470 clat percentiles (msec): 00:21:41.470 | 1.00th=[ 271], 5.00th=[ 426], 10.00th=[ 514], 20.00th=[ 592], 00:21:41.470 | 30.00th=[ 609], 40.00th=[ 617], 50.00th=[ 617], 60.00th=[ 625], 00:21:41.470 | 70.00th=[ 634], 80.00th=[ 642], 90.00th=[ 651], 95.00th=[ 726], 00:21:41.470 | 99.00th=[ 911], 99.50th=[ 936], 99.90th=[ 961], 99.95th=[ 961], 00:21:41.470 | 99.99th=[ 961] 00:21:41.470 bw ( KiB/s): min= 6400, max=12518, per=3.17%, avg=11637.90, stdev=1850.67, samples=10 00:21:41.470 iops : min= 50, max= 97, avg=90.60, stdev=14.33, samples=10 00:21:41.470 lat (msec) : 50=45.55%, 100=2.81%, 250=3.47%, 500=3.47%, 750=42.46% 00:21:41.470 lat (msec) : 1000=2.25% 00:21:41.470 cpu : usr=0.26%, sys=0.48%, ctx=610, majf=0, minf=1 00:21:41.470 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=3.0%, >=64=94.1% 00:21:41.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.470 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.470 issued rwts: total=549,518,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.470 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.470 job13: (groupid=0, jobs=1): err= 0: pid=95602: Tue Jul 23 22:23:13 2024 00:21:41.470 read: IOPS=98, BW=12.4MiB/s (13.0MB/s)(66.2MiB/5354msec) 00:21:41.470 slat (usec): min=8, max=954, avg=30.11, 
stdev=51.54 00:21:41.470 clat (msec): min=30, max=166, avg=48.31, stdev=20.14 00:21:41.470 lat (msec): min=30, max=166, avg=48.34, stdev=20.14 00:21:41.470 clat percentiles (msec): 00:21:41.470 | 1.00th=[ 37], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 41], 00:21:41.470 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:21:41.470 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 65], 95.00th=[ 97], 00:21:41.470 | 99.00th=[ 138], 99.50th=[ 150], 99.90th=[ 167], 99.95th=[ 167], 00:21:41.470 | 99.99th=[ 167] 00:21:41.470 bw ( KiB/s): min= 9472, max=22829, per=3.70%, avg=13544.40, stdev=3901.87, samples=10 00:21:41.470 iops : min= 74, max= 178, avg=105.70, stdev=30.43, samples=10 00:21:41.470 write: IOPS=97, BW=12.1MiB/s (12.7MB/s)(65.0MiB/5354msec); 0 zone resets 00:21:41.470 slat (usec): min=11, max=1106, avg=47.69, stdev=100.79 00:21:41.470 clat (msec): min=155, max=952, avg=608.90, stdev=93.23 00:21:41.470 lat (msec): min=155, max=952, avg=608.95, stdev=93.23 00:21:41.470 clat percentiles (msec): 00:21:41.470 | 1.00th=[ 255], 5.00th=[ 405], 10.00th=[ 502], 20.00th=[ 600], 00:21:41.470 | 30.00th=[ 609], 40.00th=[ 617], 50.00th=[ 625], 60.00th=[ 625], 00:21:41.470 | 70.00th=[ 634], 80.00th=[ 642], 90.00th=[ 659], 95.00th=[ 693], 00:21:41.470 | 99.00th=[ 894], 99.50th=[ 919], 99.90th=[ 953], 99.95th=[ 953], 00:21:41.470 | 99.99th=[ 953] 00:21:41.470 bw ( KiB/s): min= 6669, max=12544, per=3.18%, avg=11698.00, stdev=1781.12, samples=10 00:21:41.470 iops : min= 52, max= 98, avg=91.30, stdev=13.92, samples=10 00:21:41.470 lat (msec) : 50=44.10%, 100=3.90%, 250=2.95%, 500=4.38%, 750=42.86% 00:21:41.470 lat (msec) : 1000=1.81% 00:21:41.470 cpu : usr=0.22%, sys=0.50%, ctx=678, majf=0, minf=1 00:21:41.470 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.0% 00:21:41.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.470 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.470 issued rwts: 
total=530,520,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.470 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.470 job14: (groupid=0, jobs=1): err= 0: pid=95603: Tue Jul 23 22:23:13 2024 00:21:41.470 read: IOPS=93, BW=11.7MiB/s (12.3MB/s)(63.0MiB/5367msec) 00:21:41.471 slat (usec): min=8, max=401, avg=32.92, stdev=25.42 00:21:41.471 clat (msec): min=27, max=387, avg=49.10, stdev=30.76 00:21:41.471 lat (msec): min=27, max=387, avg=49.14, stdev=30.76 00:21:41.471 clat percentiles (msec): 00:21:41.471 | 1.00th=[ 38], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 42], 00:21:41.471 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:21:41.471 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 50], 95.00th=[ 99], 00:21:41.471 | 99.00th=[ 171], 99.50th=[ 203], 99.90th=[ 388], 99.95th=[ 388], 00:21:41.471 | 99.99th=[ 388] 00:21:41.471 bw ( KiB/s): min= 9728, max=15647, per=3.50%, avg=12823.60, stdev=1857.91, samples=10 00:21:41.471 iops : min= 76, max= 122, avg=100.00, stdev=14.52, samples=10 00:21:41.471 write: IOPS=96, BW=12.0MiB/s (12.6MB/s)(64.5MiB/5367msec); 0 zone resets 00:21:41.471 slat (usec): min=12, max=1097, avg=40.20, stdev=51.81 00:21:41.471 clat (msec): min=172, max=945, avg=616.79, stdev=89.01 00:21:41.471 lat (msec): min=172, max=945, avg=616.83, stdev=89.02 00:21:41.471 clat percentiles (msec): 00:21:41.471 | 1.00th=[ 279], 5.00th=[ 435], 10.00th=[ 567], 20.00th=[ 609], 00:21:41.471 | 30.00th=[ 617], 40.00th=[ 617], 50.00th=[ 625], 60.00th=[ 634], 00:21:41.471 | 70.00th=[ 634], 80.00th=[ 642], 90.00th=[ 659], 95.00th=[ 701], 00:21:41.471 | 99.00th=[ 911], 99.50th=[ 927], 99.90th=[ 944], 99.95th=[ 944], 00:21:41.471 | 99.99th=[ 944] 00:21:41.471 bw ( KiB/s): min= 6156, max=12544, per=3.17%, avg=11644.20, stdev=1941.27, samples=10 00:21:41.471 iops : min= 48, max= 98, avg=90.80, stdev=15.14, samples=10 00:21:41.471 lat (msec) : 50=44.51%, 100=2.45%, 250=2.65%, 500=3.73%, 750=44.61% 00:21:41.471 lat (msec) : 1000=2.06% 00:21:41.471 cpu : 
usr=0.34%, sys=0.63%, ctx=576, majf=0, minf=1 00:21:41.471 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.8% 00:21:41.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.471 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.471 issued rwts: total=504,516,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.471 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.471 job15: (groupid=0, jobs=1): err= 0: pid=95604: Tue Jul 23 22:23:13 2024 00:21:41.471 read: IOPS=90, BW=11.3MiB/s (11.8MB/s)(60.2MiB/5348msec) 00:21:41.471 slat (usec): min=6, max=110, avg=25.41, stdev=13.82 00:21:41.471 clat (msec): min=31, max=378, avg=52.58, stdev=37.45 00:21:41.471 lat (msec): min=31, max=378, avg=52.61, stdev=37.45 00:21:41.471 clat percentiles (msec): 00:21:41.471 | 1.00th=[ 32], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 41], 00:21:41.471 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:21:41.471 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 84], 95.00th=[ 129], 00:21:41.471 | 99.00th=[ 157], 99.50th=[ 359], 99.90th=[ 380], 99.95th=[ 380], 00:21:41.471 | 99.99th=[ 380] 00:21:41.471 bw ( KiB/s): min= 7936, max=19456, per=3.34%, avg=12236.80, stdev=3436.30, samples=10 00:21:41.471 iops : min= 62, max= 152, avg=95.60, stdev=26.85, samples=10 00:21:41.471 write: IOPS=96, BW=12.0MiB/s (12.6MB/s)(64.4MiB/5348msec); 0 zone resets 00:21:41.471 slat (usec): min=9, max=185, avg=31.53, stdev=14.16 00:21:41.471 clat (msec): min=160, max=956, avg=614.51, stdev=91.91 00:21:41.471 lat (msec): min=160, max=956, avg=614.55, stdev=91.91 00:21:41.471 clat percentiles (msec): 00:21:41.471 | 1.00th=[ 257], 5.00th=[ 426], 10.00th=[ 527], 20.00th=[ 600], 00:21:41.471 | 30.00th=[ 609], 40.00th=[ 617], 50.00th=[ 625], 60.00th=[ 634], 00:21:41.471 | 70.00th=[ 642], 80.00th=[ 651], 90.00th=[ 667], 95.00th=[ 701], 00:21:41.471 | 99.00th=[ 877], 99.50th=[ 936], 99.90th=[ 961], 99.95th=[ 961], 00:21:41.471 | 
99.99th=[ 961] 00:21:41.471 bw ( KiB/s): min= 6400, max=12544, per=3.17%, avg=11673.60, stdev=1866.44, samples=10 00:21:41.471 iops : min= 50, max= 98, avg=91.20, stdev=14.58, samples=10 00:21:41.471 lat (msec) : 50=41.73%, 100=2.71%, 250=4.01%, 500=3.81%, 750=45.64% 00:21:41.471 lat (msec) : 1000=2.11% 00:21:41.471 cpu : usr=0.17%, sys=0.52%, ctx=566, majf=0, minf=1 00:21:41.471 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:21:41.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.471 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.471 issued rwts: total=482,515,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.471 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.471 job16: (groupid=0, jobs=1): err= 0: pid=95605: Tue Jul 23 22:23:13 2024 00:21:41.471 read: IOPS=100, BW=12.6MiB/s (13.2MB/s)(67.6MiB/5367msec) 00:21:41.471 slat (usec): min=8, max=584, avg=30.89, stdev=44.22 00:21:41.471 clat (msec): min=4, max=388, avg=51.15, stdev=41.87 00:21:41.471 lat (msec): min=4, max=388, avg=51.18, stdev=41.87 00:21:41.471 clat percentiles (msec): 00:21:41.471 | 1.00th=[ 9], 5.00th=[ 34], 10.00th=[ 40], 20.00th=[ 41], 00:21:41.471 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:21:41.471 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 59], 95.00th=[ 140], 00:21:41.471 | 99.00th=[ 194], 99.50th=[ 368], 99.90th=[ 388], 99.95th=[ 388], 00:21:41.471 | 99.99th=[ 388] 00:21:41.471 bw ( KiB/s): min= 9490, max=25856, per=3.74%, avg=13697.80, stdev=4739.02, samples=10 00:21:41.471 iops : min= 74, max= 202, avg=107.00, stdev=37.04, samples=10 00:21:41.471 write: IOPS=95, BW=11.9MiB/s (12.5MB/s)(64.1MiB/5367msec); 0 zone resets 00:21:41.471 slat (usec): min=13, max=1134, avg=40.03, stdev=67.40 00:21:41.471 clat (msec): min=78, max=978, avg=614.47, stdev=92.36 00:21:41.471 lat (msec): min=78, max=978, avg=614.51, stdev=92.35 00:21:41.471 clat percentiles (msec): 
00:21:41.471 | 1.00th=[ 236], 5.00th=[ 439], 10.00th=[ 542], 20.00th=[ 600], 00:21:41.471 | 30.00th=[ 617], 40.00th=[ 617], 50.00th=[ 625], 60.00th=[ 634], 00:21:41.471 | 70.00th=[ 634], 80.00th=[ 642], 90.00th=[ 651], 95.00th=[ 709], 00:21:41.471 | 99.00th=[ 919], 99.50th=[ 961], 99.90th=[ 978], 99.95th=[ 978], 00:21:41.471 | 99.99th=[ 978] 00:21:41.471 bw ( KiB/s): min= 6400, max=12569, per=3.17%, avg=11650.50, stdev=1867.99, samples=10 00:21:41.471 iops : min= 50, max= 98, avg=91.00, stdev=14.58, samples=10 00:21:41.471 lat (msec) : 10=0.57%, 20=0.76%, 50=44.12%, 100=3.32%, 250=2.66% 00:21:41.471 lat (msec) : 500=3.13%, 750=43.55%, 1000=1.90% 00:21:41.471 cpu : usr=0.22%, sys=0.50%, ctx=748, majf=0, minf=1 00:21:41.471 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.0% 00:21:41.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.471 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.471 issued rwts: total=541,513,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.471 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.471 job17: (groupid=0, jobs=1): err= 0: pid=95606: Tue Jul 23 22:23:13 2024 00:21:41.471 read: IOPS=90, BW=11.3MiB/s (11.9MB/s)(60.6MiB/5351msec) 00:21:41.471 slat (usec): min=10, max=690, avg=43.69, stdev=68.73 00:21:41.471 clat (msec): min=21, max=373, avg=51.96, stdev=38.24 00:21:41.471 lat (msec): min=21, max=373, avg=52.00, stdev=38.24 00:21:41.471 clat percentiles (msec): 00:21:41.471 | 1.00th=[ 33], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 41], 00:21:41.471 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 43], 00:21:41.471 | 70.00th=[ 43], 80.00th=[ 45], 90.00th=[ 82], 95.00th=[ 107], 00:21:41.471 | 99.00th=[ 355], 99.50th=[ 363], 99.90th=[ 376], 99.95th=[ 376], 00:21:41.471 | 99.99th=[ 376] 00:21:41.471 bw ( KiB/s): min= 7936, max=23552, per=3.35%, avg=12261.40, stdev=4604.87, samples=10 00:21:41.471 iops : min= 62, max= 184, avg=95.60, 
stdev=36.09, samples=10 00:21:41.471 write: IOPS=96, BW=12.0MiB/s (12.6MB/s)(64.2MiB/5351msec); 0 zone resets 00:21:41.471 slat (usec): min=13, max=1408, avg=56.13, stdev=101.20 00:21:41.471 clat (msec): min=159, max=952, avg=615.97, stdev=92.11 00:21:41.471 lat (msec): min=159, max=952, avg=616.02, stdev=92.10 00:21:41.471 clat percentiles (msec): 00:21:41.471 | 1.00th=[ 257], 5.00th=[ 435], 10.00th=[ 550], 20.00th=[ 600], 00:21:41.471 | 30.00th=[ 617], 40.00th=[ 617], 50.00th=[ 625], 60.00th=[ 634], 00:21:41.471 | 70.00th=[ 642], 80.00th=[ 651], 90.00th=[ 667], 95.00th=[ 709], 00:21:41.471 | 99.00th=[ 894], 99.50th=[ 936], 99.90th=[ 953], 99.95th=[ 953], 00:21:41.471 | 99.99th=[ 953] 00:21:41.471 bw ( KiB/s): min= 6400, max=12569, per=3.17%, avg=11671.10, stdev=1865.96, samples=10 00:21:41.471 iops : min= 50, max= 98, avg=91.00, stdev=14.51, samples=10 00:21:41.471 lat (msec) : 50=41.44%, 100=3.80%, 250=3.30%, 500=3.90%, 750=45.35% 00:21:41.471 lat (msec) : 1000=2.20% 00:21:41.471 cpu : usr=0.09%, sys=0.71%, ctx=744, majf=0, minf=1 00:21:41.471 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:21:41.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.471 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.471 issued rwts: total=485,514,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.471 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.471 job18: (groupid=0, jobs=1): err= 0: pid=95607: Tue Jul 23 22:23:13 2024 00:21:41.471 read: IOPS=96, BW=12.1MiB/s (12.7MB/s)(64.9MiB/5363msec) 00:21:41.471 slat (usec): min=9, max=469, avg=35.95, stdev=50.78 00:21:41.471 clat (msec): min=8, max=401, avg=49.15, stdev=35.94 00:21:41.471 lat (msec): min=8, max=401, avg=49.19, stdev=35.93 00:21:41.471 clat percentiles (msec): 00:21:41.471 | 1.00th=[ 14], 5.00th=[ 39], 10.00th=[ 41], 20.00th=[ 41], 00:21:41.471 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 
00:21:41.471 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 48], 95.00th=[ 91], 00:21:41.471 | 99.00th=[ 171], 99.50th=[ 372], 99.90th=[ 401], 99.95th=[ 401], 00:21:41.471 | 99.99th=[ 401] 00:21:41.471 bw ( KiB/s): min=11520, max=18981, per=3.60%, avg=13185.00, stdev=2200.78, samples=10 00:21:41.471 iops : min= 90, max= 148, avg=102.90, stdev=17.12, samples=10 00:21:41.471 write: IOPS=95, BW=12.0MiB/s (12.5MB/s)(64.1MiB/5363msec); 0 zone resets 00:21:41.471 slat (usec): min=12, max=486, avg=40.62, stdev=48.70 00:21:41.471 clat (msec): min=152, max=971, avg=618.38, stdev=92.86 00:21:41.471 lat (msec): min=152, max=971, avg=618.42, stdev=92.87 00:21:41.471 clat percentiles (msec): 00:21:41.471 | 1.00th=[ 275], 5.00th=[ 439], 10.00th=[ 584], 20.00th=[ 609], 00:21:41.471 | 30.00th=[ 617], 40.00th=[ 617], 50.00th=[ 625], 60.00th=[ 625], 00:21:41.472 | 70.00th=[ 634], 80.00th=[ 642], 90.00th=[ 667], 95.00th=[ 718], 00:21:41.472 | 99.00th=[ 919], 99.50th=[ 953], 99.90th=[ 969], 99.95th=[ 969], 00:21:41.472 | 99.99th=[ 969] 00:21:41.472 bw ( KiB/s): min= 6156, max=12544, per=3.16%, avg=11621.10, stdev=1938.21, samples=10 00:21:41.472 iops : min= 48, max= 98, avg=90.70, stdev=15.14, samples=10 00:21:41.472 lat (msec) : 10=0.29%, 20=0.29%, 50=45.16%, 100=2.52%, 250=2.03% 00:21:41.472 lat (msec) : 500=3.59%, 750=43.99%, 1000=2.13% 00:21:41.472 cpu : usr=0.17%, sys=0.50%, ctx=764, majf=0, minf=1 00:21:41.472 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.9% 00:21:41.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.472 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.472 issued rwts: total=519,513,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.472 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.472 job19: (groupid=0, jobs=1): err= 0: pid=95608: Tue Jul 23 22:23:13 2024 00:21:41.472 read: IOPS=104, BW=13.1MiB/s (13.7MB/s)(70.2MiB/5380msec) 00:21:41.472 slat (usec): min=8, 
max=134, avg=29.53, stdev=15.53 00:21:41.472 clat (msec): min=15, max=397, avg=49.00, stdev=27.81 00:21:41.472 lat (msec): min=15, max=397, avg=49.03, stdev=27.81 00:21:41.472 clat percentiles (msec): 00:21:41.472 | 1.00th=[ 27], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 42], 00:21:41.472 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:21:41.472 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 59], 95.00th=[ 101], 00:21:41.472 | 99.00th=[ 174], 99.50th=[ 180], 99.90th=[ 397], 99.95th=[ 397], 00:21:41.472 | 99.99th=[ 397] 00:21:41.472 bw ( KiB/s): min=11520, max=22528, per=3.92%, avg=14359.00, stdev=3269.89, samples=10 00:21:41.472 iops : min= 90, max= 176, avg=112.10, stdev=25.60, samples=10 00:21:41.472 write: IOPS=95, BW=12.0MiB/s (12.6MB/s)(64.5MiB/5380msec); 0 zone resets 00:21:41.472 slat (usec): min=15, max=7212, avg=50.02, stdev=316.56 00:21:41.472 clat (msec): min=165, max=987, avg=612.13, stdev=93.59 00:21:41.472 lat (msec): min=172, max=987, avg=612.18, stdev=93.53 00:21:41.472 clat percentiles (msec): 00:21:41.472 | 1.00th=[ 275], 5.00th=[ 443], 10.00th=[ 550], 20.00th=[ 592], 00:21:41.472 | 30.00th=[ 600], 40.00th=[ 617], 50.00th=[ 617], 60.00th=[ 625], 00:21:41.472 | 70.00th=[ 634], 80.00th=[ 642], 90.00th=[ 651], 95.00th=[ 735], 00:21:41.472 | 99.00th=[ 944], 99.50th=[ 969], 99.90th=[ 986], 99.95th=[ 986], 00:21:41.472 | 99.99th=[ 986] 00:21:41.472 bw ( KiB/s): min= 5888, max=12800, per=3.16%, avg=11619.90, stdev=2037.13, samples=10 00:21:41.472 iops : min= 46, max= 100, avg=90.70, stdev=15.89, samples=10 00:21:41.472 lat (msec) : 20=0.28%, 50=46.29%, 100=2.78%, 250=3.06%, 500=3.34% 00:21:41.472 lat (msec) : 750=42.02%, 1000=2.23% 00:21:41.472 cpu : usr=0.30%, sys=0.63%, ctx=575, majf=0, minf=1 00:21:41.472 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=3.0%, >=64=94.2% 00:21:41.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.472 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, 
>=64=0.0% 00:21:41.472 issued rwts: total=562,516,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.472 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.472 job20: (groupid=0, jobs=1): err= 0: pid=95609: Tue Jul 23 22:23:13 2024 00:21:41.472 read: IOPS=94, BW=11.8MiB/s (12.4MB/s)(63.0MiB/5341msec) 00:21:41.472 slat (usec): min=8, max=1070, avg=43.69, stdev=80.35 00:21:41.472 clat (msec): min=33, max=364, avg=49.97, stdev=32.92 00:21:41.472 lat (msec): min=33, max=364, avg=50.01, stdev=32.91 00:21:41.472 clat percentiles (msec): 00:21:41.472 | 1.00th=[ 39], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 41], 00:21:41.472 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:21:41.472 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 61], 95.00th=[ 105], 00:21:41.472 | 99.00th=[ 138], 99.50th=[ 347], 99.90th=[ 363], 99.95th=[ 363], 00:21:41.472 | 99.99th=[ 363] 00:21:41.472 bw ( KiB/s): min=10496, max=16640, per=3.50%, avg=12791.40, stdev=1961.54, samples=10 00:21:41.472 iops : min= 82, max= 130, avg=99.70, stdev=15.30, samples=10 00:21:41.472 write: IOPS=96, BW=12.1MiB/s (12.7MB/s)(64.5MiB/5341msec); 0 zone resets 00:21:41.472 slat (usec): min=11, max=1040, avg=48.53, stdev=64.18 00:21:41.472 clat (msec): min=149, max=943, avg=612.79, stdev=91.95 00:21:41.472 lat (msec): min=149, max=943, avg=612.84, stdev=91.95 00:21:41.472 clat percentiles (msec): 00:21:41.472 | 1.00th=[ 253], 5.00th=[ 414], 10.00th=[ 542], 20.00th=[ 600], 00:21:41.472 | 30.00th=[ 609], 40.00th=[ 617], 50.00th=[ 625], 60.00th=[ 634], 00:21:41.472 | 70.00th=[ 634], 80.00th=[ 642], 90.00th=[ 659], 95.00th=[ 726], 00:21:41.472 | 99.00th=[ 902], 99.50th=[ 919], 99.90th=[ 944], 99.95th=[ 944], 00:21:41.472 | 99.99th=[ 944] 00:21:41.472 bw ( KiB/s): min= 6656, max=12569, per=3.18%, avg=11691.60, stdev=1787.10, samples=10 00:21:41.472 iops : min= 52, max= 98, avg=91.10, stdev=13.87, samples=10 00:21:41.472 lat (msec) : 50=44.22%, 100=2.55%, 250=2.75%, 500=3.82%, 750=44.51% 00:21:41.472 lat (msec) 
: 1000=2.16% 00:21:41.472 cpu : usr=0.30%, sys=0.66%, ctx=626, majf=0, minf=1 00:21:41.472 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.8% 00:21:41.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.472 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.472 issued rwts: total=504,516,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.472 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.472 job21: (groupid=0, jobs=1): err= 0: pid=95610: Tue Jul 23 22:23:13 2024 00:21:41.472 read: IOPS=87, BW=10.9MiB/s (11.5MB/s)(58.5MiB/5352msec) 00:21:41.472 slat (usec): min=9, max=591, avg=32.74, stdev=31.10 00:21:41.472 clat (msec): min=37, max=384, avg=50.31, stdev=32.92 00:21:41.472 lat (msec): min=37, max=384, avg=50.35, stdev=32.92 00:21:41.472 clat percentiles (msec): 00:21:41.472 | 1.00th=[ 39], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 42], 00:21:41.472 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:21:41.472 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 66], 95.00th=[ 104], 00:21:41.472 | 99.00th=[ 146], 99.50th=[ 355], 99.90th=[ 384], 99.95th=[ 384], 00:21:41.472 | 99.99th=[ 384] 00:21:41.472 bw ( KiB/s): min= 8448, max=16128, per=3.25%, avg=11876.10, stdev=2477.11, samples=10 00:21:41.472 iops : min= 66, max= 126, avg=92.70, stdev=19.37, samples=10 00:21:41.472 write: IOPS=96, BW=12.1MiB/s (12.6MB/s)(64.5MiB/5352msec); 0 zone resets 00:21:41.472 slat (usec): min=16, max=115, avg=37.75, stdev=15.74 00:21:41.472 clat (msec): min=157, max=932, avg=617.27, stdev=91.33 00:21:41.472 lat (msec): min=157, max=932, avg=617.31, stdev=91.33 00:21:41.472 clat percentiles (msec): 00:21:41.472 | 1.00th=[ 262], 5.00th=[ 426], 10.00th=[ 567], 20.00th=[ 600], 00:21:41.472 | 30.00th=[ 609], 40.00th=[ 617], 50.00th=[ 625], 60.00th=[ 634], 00:21:41.472 | 70.00th=[ 642], 80.00th=[ 651], 90.00th=[ 667], 95.00th=[ 684], 00:21:41.472 | 99.00th=[ 911], 99.50th=[ 919], 99.90th=[ 936], 
99.95th=[ 936], 00:21:41.472 | 99.99th=[ 936] 00:21:41.472 bw ( KiB/s): min= 6400, max=12544, per=3.17%, avg=11671.10, stdev=1865.54, samples=10 00:21:41.472 iops : min= 50, max= 98, avg=91.10, stdev=14.55, samples=10 00:21:41.472 lat (msec) : 50=42.17%, 100=2.74%, 250=2.74%, 500=3.86%, 750=46.34% 00:21:41.472 lat (msec) : 1000=2.13% 00:21:41.472 cpu : usr=0.22%, sys=0.69%, ctx=586, majf=0, minf=1 00:21:41.472 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.6% 00:21:41.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.472 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.472 issued rwts: total=468,516,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.472 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.472 job22: (groupid=0, jobs=1): err= 0: pid=95611: Tue Jul 23 22:23:13 2024 00:21:41.472 read: IOPS=110, BW=13.8MiB/s (14.5MB/s)(74.0MiB/5343msec) 00:21:41.472 slat (usec): min=8, max=374, avg=28.04, stdev=26.74 00:21:41.472 clat (msec): min=31, max=368, avg=48.70, stdev=28.95 00:21:41.472 lat (msec): min=31, max=368, avg=48.73, stdev=28.95 00:21:41.472 clat percentiles (msec): 00:21:41.472 | 1.00th=[ 34], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 41], 00:21:41.472 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:21:41.472 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 50], 95.00th=[ 102], 00:21:41.473 | 99.00th=[ 140], 99.50th=[ 359], 99.90th=[ 368], 99.95th=[ 368], 00:21:41.473 | 99.99th=[ 368] 00:21:41.473 bw ( KiB/s): min=11776, max=20264, per=4.12%, avg=15072.60, stdev=2943.00, samples=10 00:21:41.473 iops : min= 92, max= 158, avg=117.50, stdev=22.95, samples=10 00:21:41.473 write: IOPS=96, BW=12.1MiB/s (12.7MB/s)(64.8MiB/5343msec); 0 zone resets 00:21:41.473 slat (usec): min=11, max=388, avg=36.31, stdev=34.34 00:21:41.473 clat (msec): min=148, max=936, avg=603.66, stdev=91.10 00:21:41.473 lat (msec): min=148, max=936, avg=603.70, stdev=91.10 
00:21:41.473 clat percentiles (msec): 00:21:41.473 | 1.00th=[ 251], 5.00th=[ 401], 10.00th=[ 542], 20.00th=[ 584], 00:21:41.473 | 30.00th=[ 600], 40.00th=[ 609], 50.00th=[ 617], 60.00th=[ 617], 00:21:41.473 | 70.00th=[ 625], 80.00th=[ 634], 90.00th=[ 659], 95.00th=[ 701], 00:21:41.473 | 99.00th=[ 869], 99.50th=[ 902], 99.90th=[ 936], 99.95th=[ 936], 00:21:41.473 | 99.99th=[ 936] 00:21:41.473 bw ( KiB/s): min= 6669, max=12825, per=3.19%, avg=11718.50, stdev=1798.77, samples=10 00:21:41.473 iops : min= 52, max= 100, avg=91.30, stdev=13.99, samples=10 00:21:41.473 lat (msec) : 50=48.02%, 100=2.61%, 250=2.88%, 500=3.60%, 750=40.99% 00:21:41.473 lat (msec) : 1000=1.89% 00:21:41.473 cpu : usr=0.21%, sys=0.54%, ctx=725, majf=0, minf=1 00:21:41.473 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.3% 00:21:41.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.473 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.473 issued rwts: total=592,518,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.473 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.473 job23: (groupid=0, jobs=1): err= 0: pid=95612: Tue Jul 23 22:23:13 2024 00:21:41.473 read: IOPS=91, BW=11.5MiB/s (12.0MB/s)(61.5MiB/5366msec) 00:21:41.473 slat (usec): min=8, max=2881, avg=34.09, stdev=129.37 00:21:41.473 clat (msec): min=4, max=390, avg=49.44, stdev=40.95 00:21:41.473 lat (msec): min=4, max=390, avg=49.48, stdev=40.94 00:21:41.473 clat percentiles (msec): 00:21:41.473 | 1.00th=[ 9], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 41], 00:21:41.473 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 43], 00:21:41.473 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 50], 95.00th=[ 88], 00:21:41.473 | 99.00th=[ 372], 99.50th=[ 380], 99.90th=[ 393], 99.95th=[ 393], 00:21:41.473 | 99.99th=[ 393] 00:21:41.473 bw ( KiB/s): min= 6387, max=20224, per=3.41%, avg=12463.40, stdev=3772.71, samples=10 00:21:41.473 iops : min= 49, max= 158, 
avg=97.20, stdev=29.64, samples=10 00:21:41.473 write: IOPS=95, BW=11.9MiB/s (12.5MB/s)(64.0MiB/5366msec); 0 zone resets 00:21:41.473 slat (usec): min=12, max=1794, avg=37.76, stdev=79.18 00:21:41.473 clat (msec): min=105, max=982, avg=621.91, stdev=91.89 00:21:41.473 lat (msec): min=105, max=982, avg=621.95, stdev=91.89 00:21:41.473 clat percentiles (msec): 00:21:41.473 | 1.00th=[ 271], 5.00th=[ 451], 10.00th=[ 575], 20.00th=[ 609], 00:21:41.473 | 30.00th=[ 617], 40.00th=[ 625], 50.00th=[ 625], 60.00th=[ 634], 00:21:41.473 | 70.00th=[ 642], 80.00th=[ 651], 90.00th=[ 676], 95.00th=[ 751], 00:21:41.473 | 99.00th=[ 927], 99.50th=[ 969], 99.90th=[ 986], 99.95th=[ 986], 00:21:41.473 | 99.99th=[ 986] 00:21:41.473 bw ( KiB/s): min= 6144, max=12544, per=3.16%, avg=11617.30, stdev=1944.36, samples=10 00:21:41.473 iops : min= 48, max= 98, avg=90.60, stdev=15.12, samples=10 00:21:41.473 lat (msec) : 10=0.90%, 20=0.60%, 50=42.73%, 100=2.79%, 250=1.99% 00:21:41.473 lat (msec) : 500=3.29%, 750=45.22%, 1000=2.49% 00:21:41.473 cpu : usr=0.24%, sys=0.63%, ctx=582, majf=0, minf=1 00:21:41.473 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:21:41.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.473 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.473 issued rwts: total=492,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.473 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.473 job24: (groupid=0, jobs=1): err= 0: pid=95613: Tue Jul 23 22:23:13 2024 00:21:41.473 read: IOPS=102, BW=12.8MiB/s (13.5MB/s)(68.6MiB/5350msec) 00:21:41.473 slat (usec): min=8, max=832, avg=33.69, stdev=41.69 00:21:41.473 clat (msec): min=31, max=354, avg=48.61, stdev=25.11 00:21:41.473 lat (msec): min=31, max=354, avg=48.65, stdev=25.11 00:21:41.473 clat percentiles (msec): 00:21:41.473 | 1.00th=[ 36], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 42], 00:21:41.473 | 30.00th=[ 42], 40.00th=[ 42], 
50.00th=[ 43], 60.00th=[ 43], 00:21:41.473 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 53], 95.00th=[ 100], 00:21:41.473 | 99.00th=[ 153], 99.50th=[ 161], 99.90th=[ 355], 99.95th=[ 355], 00:21:41.473 | 99.99th=[ 355] 00:21:41.473 bw ( KiB/s): min=11008, max=20224, per=3.83%, avg=14003.60, stdev=2681.98, samples=10 00:21:41.473 iops : min= 86, max= 158, avg=109.10, stdev=20.93, samples=10 00:21:41.473 write: IOPS=96, BW=12.1MiB/s (12.7MB/s)(64.8MiB/5350msec); 0 zone resets 00:21:41.473 slat (usec): min=11, max=445, avg=42.32, stdev=37.13 00:21:41.473 clat (msec): min=155, max=953, avg=608.59, stdev=94.54 00:21:41.473 lat (msec): min=155, max=953, avg=608.63, stdev=94.54 00:21:41.473 clat percentiles (msec): 00:21:41.473 | 1.00th=[ 255], 5.00th=[ 414], 10.00th=[ 535], 20.00th=[ 592], 00:21:41.473 | 30.00th=[ 609], 40.00th=[ 617], 50.00th=[ 617], 60.00th=[ 625], 00:21:41.473 | 70.00th=[ 634], 80.00th=[ 634], 90.00th=[ 651], 95.00th=[ 709], 00:21:41.473 | 99.00th=[ 919], 99.50th=[ 953], 99.90th=[ 953], 99.95th=[ 953], 00:21:41.473 | 99.99th=[ 953] 00:21:41.473 bw ( KiB/s): min= 6400, max=12494, per=3.17%, avg=11651.70, stdev=1852.97, samples=10 00:21:41.473 iops : min= 50, max= 97, avg=90.70, stdev=14.38, samples=10 00:21:41.473 lat (msec) : 50=45.92%, 100=3.00%, 250=2.91%, 500=3.66%, 750=42.55% 00:21:41.473 lat (msec) : 1000=1.97% 00:21:41.473 cpu : usr=0.30%, sys=0.58%, ctx=584, majf=0, minf=1 00:21:41.473 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=3.0%, >=64=94.1% 00:21:41.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.473 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.473 issued rwts: total=549,518,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.473 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.473 job25: (groupid=0, jobs=1): err= 0: pid=95614: Tue Jul 23 22:23:13 2024 00:21:41.473 read: IOPS=99, BW=12.4MiB/s (13.0MB/s)(66.8MiB/5375msec) 00:21:41.473 slat (usec): 
min=8, max=205, avg=26.39, stdev=16.64 00:21:41.473 clat (msec): min=2, max=395, avg=48.03, stdev=35.38 00:21:41.473 lat (msec): min=2, max=395, avg=48.06, stdev=35.38 00:21:41.473 clat percentiles (msec): 00:21:41.473 | 1.00th=[ 5], 5.00th=[ 33], 10.00th=[ 40], 20.00th=[ 41], 00:21:41.473 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:21:41.473 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 51], 95.00th=[ 103], 00:21:41.473 | 99.00th=[ 163], 99.50th=[ 388], 99.90th=[ 397], 99.95th=[ 397], 00:21:41.473 | 99.99th=[ 397] 00:21:41.473 bw ( KiB/s): min=11520, max=19968, per=3.70%, avg=13534.80, stdev=2545.19, samples=10 00:21:41.473 iops : min= 90, max= 156, avg=105.50, stdev=20.01, samples=10 00:21:41.473 write: IOPS=96, BW=12.0MiB/s (12.6MB/s)(64.5MiB/5375msec); 0 zone resets 00:21:41.473 slat (usec): min=12, max=597, avg=34.99, stdev=32.39 00:21:41.473 clat (msec): min=11, max=965, avg=616.05, stdev=100.69 00:21:41.473 lat (msec): min=11, max=965, avg=616.09, stdev=100.69 00:21:41.473 clat percentiles (msec): 00:21:41.473 | 1.00th=[ 199], 5.00th=[ 447], 10.00th=[ 575], 20.00th=[ 600], 00:21:41.473 | 30.00th=[ 617], 40.00th=[ 625], 50.00th=[ 625], 60.00th=[ 634], 00:21:41.473 | 70.00th=[ 642], 80.00th=[ 642], 90.00th=[ 659], 95.00th=[ 718], 00:21:41.473 | 99.00th=[ 911], 99.50th=[ 953], 99.90th=[ 969], 99.95th=[ 969], 00:21:41.473 | 99.99th=[ 969] 00:21:41.473 bw ( KiB/s): min= 6656, max=12544, per=3.18%, avg=11691.60, stdev=1782.94, samples=10 00:21:41.473 iops : min= 52, max= 98, avg=91.10, stdev=13.84, samples=10 00:21:41.473 lat (msec) : 4=0.48%, 10=1.24%, 20=0.38%, 50=43.90%, 100=1.90% 00:21:41.473 lat (msec) : 250=3.33%, 500=3.24%, 750=43.33%, 1000=2.19% 00:21:41.473 cpu : usr=0.28%, sys=0.47%, ctx=676, majf=0, minf=1 00:21:41.473 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.0% 00:21:41.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.473 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.473 issued rwts: total=534,516,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.473 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.473 job26: (groupid=0, jobs=1): err= 0: pid=95615: Tue Jul 23 22:23:13 2024 00:21:41.473 read: IOPS=90, BW=11.3MiB/s (11.9MB/s)(60.8MiB/5371msec) 00:21:41.473 slat (usec): min=9, max=119, avg=23.73, stdev=10.64 00:21:41.473 clat (msec): min=2, max=403, avg=48.36, stdev=40.79 00:21:41.473 lat (msec): min=2, max=403, avg=48.38, stdev=40.79 00:21:41.473 clat percentiles (msec): 00:21:41.473 | 1.00th=[ 4], 5.00th=[ 31], 10.00th=[ 40], 20.00th=[ 41], 00:21:41.473 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 43], 00:21:41.473 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 47], 95.00th=[ 84], 00:21:41.473 | 99.00th=[ 376], 99.50th=[ 384], 99.90th=[ 405], 99.95th=[ 405], 00:21:41.473 | 99.99th=[ 405] 00:21:41.473 bw ( KiB/s): min= 9728, max=19712, per=3.36%, avg=12304.00, stdev=2804.74, samples=10 00:21:41.473 iops : min= 76, max= 154, avg=95.80, stdev=22.00, samples=10 00:21:41.473 write: IOPS=96, BW=12.0MiB/s (12.6MB/s)(64.5MiB/5371msec); 0 zone resets 00:21:41.473 slat (usec): min=14, max=106, avg=29.15, stdev= 9.84 00:21:41.473 clat (msec): min=8, max=953, avg=619.71, stdev=105.21 00:21:41.473 lat (msec): min=8, max=953, avg=619.74, stdev=105.21 00:21:41.473 clat percentiles (msec): 00:21:41.473 | 1.00th=[ 153], 5.00th=[ 435], 10.00th=[ 550], 20.00th=[ 609], 00:21:41.473 | 30.00th=[ 617], 40.00th=[ 625], 50.00th=[ 634], 60.00th=[ 634], 00:21:41.473 | 70.00th=[ 642], 80.00th=[ 651], 90.00th=[ 676], 95.00th=[ 735], 00:21:41.473 | 99.00th=[ 936], 99.50th=[ 936], 99.90th=[ 953], 99.95th=[ 953], 00:21:41.473 | 99.99th=[ 953] 00:21:41.473 bw ( KiB/s): min= 6912, max=12544, per=3.19%, avg=11714.60, stdev=1705.94, samples=10 00:21:41.473 iops : min= 54, max= 98, avg=91.20, stdev=13.21, samples=10 00:21:41.473 lat (msec) : 4=0.70%, 10=1.50%, 20=0.10%, 50=42.22%, 100=2.69% 00:21:41.473 lat 
(msec) : 250=1.70%, 500=3.49%, 750=45.21%, 1000=2.40% 00:21:41.473 cpu : usr=0.28%, sys=0.41%, ctx=576, majf=0, minf=1 00:21:41.474 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:21:41.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.474 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.474 issued rwts: total=486,516,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.474 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.474 job27: (groupid=0, jobs=1): err= 0: pid=95616: Tue Jul 23 22:23:13 2024 00:21:41.474 read: IOPS=91, BW=11.4MiB/s (12.0MB/s)(61.1MiB/5348msec) 00:21:41.474 slat (usec): min=8, max=628, avg=31.10, stdev=38.86 00:21:41.474 clat (msec): min=31, max=369, avg=53.58, stdev=38.07 00:21:41.474 lat (msec): min=31, max=369, avg=53.61, stdev=38.07 00:21:41.474 clat percentiles (msec): 00:21:41.474 | 1.00th=[ 33], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 42], 00:21:41.474 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:21:41.474 | 70.00th=[ 44], 80.00th=[ 45], 90.00th=[ 88], 95.00th=[ 128], 00:21:41.474 | 99.00th=[ 167], 99.50th=[ 351], 99.90th=[ 372], 99.95th=[ 372], 00:21:41.474 | 99.99th=[ 372] 00:21:41.474 bw ( KiB/s): min= 7936, max=24320, per=3.39%, avg=12390.40, stdev=4601.99, samples=10 00:21:41.474 iops : min= 62, max= 190, avg=96.80, stdev=35.95, samples=10 00:21:41.474 write: IOPS=96, BW=12.0MiB/s (12.6MB/s)(64.4MiB/5348msec); 0 zone resets 00:21:41.474 slat (usec): min=12, max=352, avg=36.70, stdev=32.59 00:21:41.474 clat (msec): min=166, max=938, avg=612.80, stdev=92.54 00:21:41.474 lat (msec): min=166, max=938, avg=612.83, stdev=92.54 00:21:41.474 clat percentiles (msec): 00:21:41.474 | 1.00th=[ 268], 5.00th=[ 430], 10.00th=[ 514], 20.00th=[ 600], 00:21:41.474 | 30.00th=[ 609], 40.00th=[ 625], 50.00th=[ 625], 60.00th=[ 634], 00:21:41.474 | 70.00th=[ 642], 80.00th=[ 651], 90.00th=[ 651], 95.00th=[ 726], 00:21:41.474 | 
99.00th=[ 911], 99.50th=[ 919], 99.90th=[ 936], 99.95th=[ 936], 00:21:41.474 | 99.99th=[ 936] 00:21:41.474 bw ( KiB/s): min= 6400, max=12544, per=3.17%, avg=11673.60, stdev=1866.44, samples=10 00:21:41.474 iops : min= 50, max= 98, avg=91.20, stdev=14.58, samples=10 00:21:41.474 lat (msec) : 50=41.33%, 100=3.49%, 250=3.88%, 500=4.18%, 750=45.02% 00:21:41.474 lat (msec) : 1000=2.09% 00:21:41.474 cpu : usr=0.19%, sys=0.47%, ctx=647, majf=0, minf=1 00:21:41.474 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:21:41.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.474 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.474 issued rwts: total=489,515,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.474 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.474 job28: (groupid=0, jobs=1): err= 0: pid=95618: Tue Jul 23 22:23:13 2024 00:21:41.474 read: IOPS=105, BW=13.2MiB/s (13.9MB/s)(71.1MiB/5370msec) 00:21:41.474 slat (usec): min=8, max=174, avg=23.60, stdev=15.17 00:21:41.474 clat (msec): min=22, max=391, avg=50.11, stdev=30.90 00:21:41.474 lat (msec): min=22, max=391, avg=50.13, stdev=30.90 00:21:41.474 clat percentiles (msec): 00:21:41.474 | 1.00th=[ 31], 5.00th=[ 39], 10.00th=[ 41], 20.00th=[ 41], 00:21:41.474 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:21:41.474 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 67], 95.00th=[ 103], 00:21:41.474 | 99.00th=[ 165], 99.50th=[ 186], 99.90th=[ 393], 99.95th=[ 393], 00:21:41.474 | 99.99th=[ 393] 00:21:41.474 bw ( KiB/s): min=11264, max=25600, per=3.96%, avg=14509.40, stdev=4142.24, samples=10 00:21:41.474 iops : min= 88, max= 200, avg=113.20, stdev=32.37, samples=10 00:21:41.474 write: IOPS=96, BW=12.0MiB/s (12.6MB/s)(64.5MiB/5370msec); 0 zone resets 00:21:41.474 slat (usec): min=14, max=3082, avg=36.74, stdev=135.54 00:21:41.474 clat (msec): min=167, max=950, avg=609.46, stdev=87.64 00:21:41.474 lat (msec): 
min=170, max=950, avg=609.50, stdev=87.62 00:21:41.474 clat percentiles (msec): 00:21:41.474 | 1.00th=[ 271], 5.00th=[ 443], 10.00th=[ 527], 20.00th=[ 592], 00:21:41.474 | 30.00th=[ 609], 40.00th=[ 609], 50.00th=[ 617], 60.00th=[ 625], 00:21:41.474 | 70.00th=[ 634], 80.00th=[ 642], 90.00th=[ 651], 95.00th=[ 743], 00:21:41.474 | 99.00th=[ 869], 99.50th=[ 919], 99.90th=[ 953], 99.95th=[ 953], 00:21:41.474 | 99.99th=[ 953] 00:21:41.474 bw ( KiB/s): min= 5888, max=12800, per=3.17%, avg=11642.90, stdev=2039.66, samples=10 00:21:41.474 iops : min= 46, max= 100, avg=90.80, stdev=15.87, samples=10 00:21:41.474 lat (msec) : 50=45.25%, 100=4.33%, 250=3.04%, 500=3.32%, 750=41.84% 00:21:41.474 lat (msec) : 1000=2.21% 00:21:41.474 cpu : usr=0.24%, sys=0.43%, ctx=645, majf=0, minf=1 00:21:41.474 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=2.9%, >=64=94.2% 00:21:41.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.474 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.474 issued rwts: total=569,516,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.474 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.474 job29: (groupid=0, jobs=1): err= 0: pid=95622: Tue Jul 23 22:23:13 2024 00:21:41.474 read: IOPS=100, BW=12.6MiB/s (13.2MB/s)(67.6MiB/5370msec) 00:21:41.474 slat (usec): min=9, max=255, avg=26.64, stdev=18.62 00:21:41.474 clat (msec): min=8, max=389, avg=50.69, stdev=37.11 00:21:41.474 lat (msec): min=8, max=389, avg=50.72, stdev=37.10 00:21:41.474 clat percentiles (msec): 00:21:41.474 | 1.00th=[ 10], 5.00th=[ 34], 10.00th=[ 40], 20.00th=[ 41], 00:21:41.474 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:21:41.474 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 68], 95.00th=[ 97], 00:21:41.474 | 99.00th=[ 197], 99.50th=[ 380], 99.90th=[ 388], 99.95th=[ 388], 00:21:41.474 | 99.99th=[ 388] 00:21:41.474 bw ( KiB/s): min= 9728, max=26368, per=3.76%, avg=13772.80, stdev=4846.45, 
samples=10 00:21:41.474 iops : min= 76, max= 206, avg=107.60, stdev=37.86, samples=10 00:21:41.474 write: IOPS=95, BW=11.9MiB/s (12.5MB/s)(64.1MiB/5370msec); 0 zone resets 00:21:41.474 slat (usec): min=13, max=288, avg=32.90, stdev=22.60 00:21:41.474 clat (msec): min=119, max=968, avg=615.59, stdev=92.10 00:21:41.474 lat (msec): min=119, max=968, avg=615.62, stdev=92.10 00:21:41.474 clat percentiles (msec): 00:21:41.474 | 1.00th=[ 275], 5.00th=[ 460], 10.00th=[ 550], 20.00th=[ 600], 00:21:41.474 | 30.00th=[ 609], 40.00th=[ 617], 50.00th=[ 625], 60.00th=[ 625], 00:21:41.474 | 70.00th=[ 634], 80.00th=[ 642], 90.00th=[ 651], 95.00th=[ 726], 00:21:41.474 | 99.00th=[ 936], 99.50th=[ 944], 99.90th=[ 969], 99.95th=[ 969], 00:21:41.474 | 99.99th=[ 969] 00:21:41.474 bw ( KiB/s): min= 5888, max=12544, per=3.15%, avg=11596.80, stdev=2019.54, samples=10 00:21:41.474 iops : min= 46, max= 98, avg=90.60, stdev=15.78, samples=10 00:21:41.474 lat (msec) : 10=0.57%, 20=0.57%, 50=43.17%, 100=4.65%, 250=2.56% 00:21:41.474 lat (msec) : 500=3.13%, 750=43.17%, 1000=2.18% 00:21:41.474 cpu : usr=0.20%, sys=0.50%, ctx=675, majf=0, minf=1 00:21:41.474 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.0% 00:21:41.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.474 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:41.474 issued rwts: total=541,513,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.474 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:41.474 00:21:41.474 Run status group 0 (all jobs): 00:21:41.474 READ: bw=357MiB/s (375MB/s), 10.7MiB/s-13.8MiB/s (11.2MB/s-14.5MB/s), io=1925MiB (2019MB), run=5341-5387msec 00:21:41.474 WRITE: bw=359MiB/s (377MB/s), 11.9MiB/s-12.1MiB/s (12.5MB/s-12.7MB/s), io=1934MiB (2028MB), run=5341-5387msec 00:21:41.474 00:21:41.474 Disk stats (read/write): 00:21:41.474 sda: ios=482/500, merge=0/0, ticks=22498/302285, in_queue=324783, util=88.14% 00:21:41.474 sdb: 
ios=507/500, merge=0/0, ticks=22667/301955, in_queue=324622, util=90.16% 00:21:41.474 sdc: ios=579/500, merge=0/0, ticks=25347/299567, in_queue=324914, util=90.28% 00:21:41.474 sdd: ios=526/501, merge=0/0, ticks=22674/303558, in_queue=326232, util=91.15% 00:21:41.474 sde: ios=546/500, merge=0/0, ticks=25143/299299, in_queue=324442, util=91.02% 00:21:41.474 sdg: ios=555/500, merge=0/0, ticks=25072/299391, in_queue=324463, util=90.82% 00:21:41.474 sdf: ios=540/501, merge=0/0, ticks=23791/301897, in_queue=325689, util=91.59% 00:21:41.474 sdh: ios=486/501, merge=0/0, ticks=23096/301661, in_queue=324757, util=91.02% 00:21:41.474 sdi: ios=464/504, merge=0/0, ticks=21404/305962, in_queue=327366, util=91.88% 00:21:41.474 sdj: ios=537/501, merge=0/0, ticks=26730/299566, in_queue=326297, util=92.27% 00:21:41.474 sdk: ios=542/504, merge=0/0, ticks=26087/301280, in_queue=327367, util=92.36% 00:21:41.474 sdl: ios=502/500, merge=0/0, ticks=25387/299069, in_queue=324456, util=92.61% 00:21:41.474 sdm: ios=549/501, merge=0/0, ticks=27019/298193, in_queue=325212, util=92.78% 00:21:41.474 sdn: ios=530/501, merge=0/0, ticks=25500/299116, in_queue=324616, util=92.73% 00:21:41.474 sdo: ios=504/500, merge=0/0, ticks=23999/301539, in_queue=325538, util=93.43% 00:21:41.474 sdp: ios=482/499, merge=0/0, ticks=24022/300467, in_queue=324489, util=93.80% 00:21:41.474 sdq: ios=541/502, merge=0/0, ticks=25867/300803, in_queue=326670, util=94.39% 00:21:41.474 sdr: ios=485/501, merge=0/0, ticks=23487/301872, in_queue=325360, util=94.08% 00:21:41.474 sds: ios=519/501, merge=0/0, ticks=23992/302233, in_queue=326226, util=94.67% 00:21:41.474 sdt: ios=562/500, merge=0/0, ticks=27133/298755, in_queue=325889, util=95.35% 00:21:41.474 sdu: ios=504/500, merge=0/0, ticks=23842/300312, in_queue=324155, util=94.60% 00:21:41.474 sdv: ios=468/500, merge=0/0, ticks=22505/302108, in_queue=324614, util=95.40% 00:21:41.474 sdw: ios=592/500, merge=0/0, ticks=27731/296271, in_queue=324002, util=94.96% 00:21:41.474 
sdx: ios=492/502, merge=0/0, ticks=22592/304335, in_queue=326928, util=96.37% 00:21:41.474 sdy: ios=549/500, merge=0/0, ticks=26315/298345, in_queue=324660, util=95.64% 00:21:41.474 sdz: ios=534/505, merge=0/0, ticks=24191/303190, in_queue=327382, util=96.59% 00:21:41.474 sdaa: ios=486/507, merge=0/0, ticks=21733/306148, in_queue=327881, util=97.04% 00:21:41.474 sdab: ios=489/500, merge=0/0, ticks=24852/300061, in_queue=324914, util=96.16% 00:21:41.474 sdac: ios=569/500, merge=0/0, ticks=27743/297823, in_queue=325566, util=96.81% 00:21:41.474 sdad: ios=541/501, merge=0/0, ticks=26305/300345, in_queue=326650, util=97.51% 00:21:41.474 [2024-07-23 22:23:13.124439] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.474 [2024-07-23 22:23:13.127699] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.474 [2024-07-23 22:23:13.131059] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.474 [2024-07-23 22:23:13.134720] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.475 [2024-07-23 22:23:13.137393] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.475 [2024-07-23 22:23:13.139985] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.475 [2024-07-23 22:23:13.142601] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.475 [2024-07-23 22:23:13.145389] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.475 [2024-07-23 22:23:13.148765] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.475 22:23:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 262144 -d 16 -t randwrite -r 10 00:21:41.475 [global] 00:21:41.475 thread=1 00:21:41.475 invalidate=1 
00:21:41.475 rw=randwrite 00:21:41.475 time_based=1 00:21:41.475 runtime=10 00:21:41.475 ioengine=libaio 00:21:41.475 direct=1 00:21:41.475 bs=262144 00:21:41.475 iodepth=16 00:21:41.475 norandommap=1 00:21:41.475 numjobs=1 00:21:41.475 00:21:41.475 [job0] 00:21:41.475 filename=/dev/sda 00:21:41.475 [job1] 00:21:41.475 filename=/dev/sdb 00:21:41.475 [job2] 00:21:41.475 filename=/dev/sdc 00:21:41.475 [job3] 00:21:41.475 filename=/dev/sdd 00:21:41.475 [job4] 00:21:41.475 filename=/dev/sde 00:21:41.475 [job5] 00:21:41.475 filename=/dev/sdg 00:21:41.475 [job6] 00:21:41.475 filename=/dev/sdf 00:21:41.475 [job7] 00:21:41.475 filename=/dev/sdh 00:21:41.475 [job8] 00:21:41.475 filename=/dev/sdi 00:21:41.475 [job9] 00:21:41.475 filename=/dev/sdj 00:21:41.475 [job10] 00:21:41.475 filename=/dev/sdk 00:21:41.475 [job11] 00:21:41.475 filename=/dev/sdl 00:21:41.475 [job12] 00:21:41.475 filename=/dev/sdm 00:21:41.475 [job13] 00:21:41.475 filename=/dev/sdn 00:21:41.475 [job14] 00:21:41.475 filename=/dev/sdo 00:21:41.475 [job15] 00:21:41.475 filename=/dev/sdp 00:21:41.475 [job16] 00:21:41.475 filename=/dev/sdq 00:21:41.475 [job17] 00:21:41.475 filename=/dev/sdr 00:21:41.475 [job18] 00:21:41.475 filename=/dev/sds 00:21:41.475 [job19] 00:21:41.475 filename=/dev/sdt 00:21:41.475 [job20] 00:21:41.475 filename=/dev/sdu 00:21:41.475 [job21] 00:21:41.475 filename=/dev/sdv 00:21:41.475 [job22] 00:21:41.475 filename=/dev/sdw 00:21:41.475 [job23] 00:21:41.475 filename=/dev/sdx 00:21:41.475 [job24] 00:21:41.475 filename=/dev/sdy 00:21:41.475 [job25] 00:21:41.475 filename=/dev/sdz 00:21:41.475 [job26] 00:21:41.475 filename=/dev/sdaa 00:21:41.475 [job27] 00:21:41.475 filename=/dev/sdab 00:21:41.475 [job28] 00:21:41.475 filename=/dev/sdac 00:21:41.475 [job29] 00:21:41.475 filename=/dev/sdad 00:21:41.742 queue_depth set to 113 (sda) 00:21:41.742 queue_depth set to 113 (sdb) 00:21:41.742 queue_depth set to 113 (sdc) 00:21:41.742 queue_depth set to 113 (sdd) 00:21:41.742 queue_depth set to 113 
(sde) 00:21:41.742 queue_depth set to 113 (sdg) 00:21:41.742 queue_depth set to 113 (sdf) 00:21:41.742 queue_depth set to 113 (sdh) 00:21:41.742 queue_depth set to 113 (sdi) 00:21:41.742 queue_depth set to 113 (sdj) 00:21:41.742 queue_depth set to 113 (sdk) 00:21:41.742 queue_depth set to 113 (sdl) 00:21:41.742 queue_depth set to 113 (sdm) 00:21:41.742 queue_depth set to 113 (sdn) 00:21:41.742 queue_depth set to 113 (sdo) 00:21:41.742 queue_depth set to 113 (sdp) 00:21:41.742 queue_depth set to 113 (sdq) 00:21:41.742 queue_depth set to 113 (sdr) 00:21:41.742 queue_depth set to 113 (sds) 00:21:41.742 queue_depth set to 113 (sdt) 00:21:41.742 queue_depth set to 113 (sdu) 00:21:41.742 queue_depth set to 113 (sdv) 00:21:41.742 queue_depth set to 113 (sdw) 00:21:41.742 queue_depth set to 113 (sdx) 00:21:41.742 queue_depth set to 113 (sdy) 00:21:41.742 queue_depth set to 113 (sdz) 00:21:41.742 queue_depth set to 113 (sdaa) 00:21:41.742 queue_depth set to 113 (sdab) 00:21:41.742 queue_depth set to 113 (sdac) 00:21:41.742 queue_depth set to 113 (sdad) 00:21:42.024 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 job7: (g=0): rw=randwrite, 
bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 job11: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 job12: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 job13: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 job14: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 job15: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 job16: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 job17: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 job18: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 job19: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 job20: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 job21: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 job22: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=16 00:21:42.024 job23: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 job24: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 job25: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 job26: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 job27: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 job28: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 job29: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:42.024 fio-3.35 00:21:42.024 Starting 30 threads 00:21:42.024 [2024-07-23 22:23:14.082305] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.024 [2024-07-23 22:23:14.086891] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.024 [2024-07-23 22:23:14.090828] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.024 [2024-07-23 22:23:14.093387] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.024 [2024-07-23 22:23:14.095903] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.024 [2024-07-23 22:23:14.098403] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.024 [2024-07-23 22:23:14.100946] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.024 [2024-07-23 22:23:14.103409] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.024 [2024-07-23 
22:23:14.105974] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.024 [2024-07-23 22:23:14.108629] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.024 [2024-07-23 22:23:14.111100] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.024 [2024-07-23 22:23:14.113676] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.024 [2024-07-23 22:23:14.116159] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.024 [2024-07-23 22:23:14.118583] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.024 [2024-07-23 22:23:14.121070] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.024 [2024-07-23 22:23:14.123574] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.025 [2024-07-23 22:23:14.126049] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.025 [2024-07-23 22:23:14.128498] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.025 [2024-07-23 22:23:14.131063] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.025 [2024-07-23 22:23:14.133612] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.025 [2024-07-23 22:23:14.136191] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.025 [2024-07-23 22:23:14.138640] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.025 [2024-07-23 22:23:14.141094] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.025 [2024-07-23 22:23:14.143745] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.025 [2024-07-23 22:23:14.146383] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.025 [2024-07-23 22:23:14.148828] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.025 [2024-07-23 22:23:14.153532] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.025 [2024-07-23 22:23:14.156210] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.025 [2024-07-23 22:23:14.158972] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:42.025 [2024-07-23 22:23:14.161564] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.254 [2024-07-23 22:23:25.006949] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.254 [2024-07-23 22:23:25.018068] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.254 [2024-07-23 22:23:25.022397] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.254 [2024-07-23 22:23:25.025485] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.254 [2024-07-23 22:23:25.028210] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.254 [2024-07-23 22:23:25.030773] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.254 [2024-07-23 22:23:25.033357] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.254 [2024-07-23 22:23:25.036019] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.254 [2024-07-23 22:23:25.038568] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.254 [2024-07-23 22:23:25.041347] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.254 [2024-07-23 22:23:25.043882] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:21:54.254 [2024-07-23 22:23:25.046437] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.254 [2024-07-23 22:23:25.049169] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.254 [2024-07-23 22:23:25.051663] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.254 [2024-07-23 22:23:25.054309] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.254 [2024-07-23 22:23:25.056879] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.254 [2024-07-23 22:23:25.059570] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.254 00:21:54.254 job0: (groupid=0, jobs=1): err= 0: pid=96125: Tue Jul 23 22:23:25 2024 00:21:54.254 write: IOPS=85, BW=21.4MiB/s (22.5MB/s)(218MiB/10169msec); 0 zone resets 00:21:54.254 slat (usec): min=20, max=353, avg=72.75, stdev=31.83 00:21:54.254 clat (msec): min=18, max=338, avg=186.32, stdev=17.40 00:21:54.254 lat (msec): min=18, max=338, avg=186.39, stdev=17.41 00:21:54.254 clat percentiles (msec): 00:21:54.254 | 1.00th=[ 112], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.254 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.254 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.254 | 99.00th=[ 245], 99.50th=[ 292], 99.90th=[ 338], 99.95th=[ 338], 00:21:54.254 | 99.99th=[ 338] 00:21:54.254 bw ( KiB/s): min=21034, max=22528, per=3.33%, avg=21934.70, stdev=292.39, samples=20 00:21:54.254 iops : min= 82, max= 88, avg=85.55, stdev= 1.19, samples=20 00:21:54.254 lat (msec) : 20=0.11%, 50=0.23%, 100=0.57%, 250=98.17%, 500=0.92% 00:21:54.254 cpu : usr=0.24%, sys=0.45%, ctx=908, majf=0, minf=1 00:21:54.254 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:21:54.254 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.254 issued rwts: total=0,872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.254 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.254 job1: (groupid=0, jobs=1): err= 0: pid=96126: Tue Jul 23 22:23:25 2024 00:21:54.254 write: IOPS=85, BW=21.4MiB/s (22.5MB/s)(218MiB/10167msec); 0 zone resets 00:21:54.254 slat (usec): min=28, max=220, avg=60.66, stdev=16.41 00:21:54.254 clat (msec): min=18, max=333, avg=186.28, stdev=17.05 00:21:54.254 lat (msec): min=18, max=333, avg=186.34, stdev=17.06 00:21:54.254 clat percentiles (msec): 00:21:54.254 | 1.00th=[ 112], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.254 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.254 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.254 | 99.00th=[ 241], 99.50th=[ 288], 99.90th=[ 334], 99.95th=[ 334], 00:21:54.254 | 99.99th=[ 334] 00:21:54.254 bw ( KiB/s): min=20992, max=22528, per=3.33%, avg=21928.20, stdev=298.29, samples=20 00:21:54.254 iops : min= 82, max= 88, avg=85.45, stdev= 1.19, samples=20 00:21:54.254 lat (msec) : 20=0.11%, 50=0.23%, 100=0.46%, 250=98.28%, 500=0.92% 00:21:54.254 cpu : usr=0.29%, sys=0.44%, ctx=876, majf=0, minf=1 00:21:54.254 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.254 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.254 issued rwts: total=0,872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.254 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.254 job2: (groupid=0, jobs=1): err= 0: pid=96144: Tue Jul 23 22:23:25 2024 00:21:54.254 write: IOPS=85, BW=21.4MiB/s (22.5MB/s)(218MiB/10171msec); 0 zone resets 00:21:54.254 slat (usec): min=30, max=243, avg=62.58, stdev=18.65 00:21:54.254 clat (msec): min=19, max=338, avg=186.35, 
stdev=17.28 00:21:54.254 lat (msec): min=19, max=338, avg=186.41, stdev=17.29 00:21:54.254 clat percentiles (msec): 00:21:54.254 | 1.00th=[ 113], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.254 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.254 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.254 | 99.00th=[ 245], 99.50th=[ 292], 99.90th=[ 338], 99.95th=[ 338], 00:21:54.254 | 99.99th=[ 338] 00:21:54.254 bw ( KiB/s): min=20992, max=22528, per=3.33%, avg=21932.60, stdev=299.27, samples=20 00:21:54.254 iops : min= 82, max= 88, avg=85.55, stdev= 1.19, samples=20 00:21:54.254 lat (msec) : 20=0.11%, 50=0.23%, 100=0.46%, 250=98.28%, 500=0.92% 00:21:54.254 cpu : usr=0.31%, sys=0.38%, ctx=884, majf=0, minf=1 00:21:54.254 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.254 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.254 issued rwts: total=0,872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.254 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.254 job3: (groupid=0, jobs=1): err= 0: pid=96155: Tue Jul 23 22:23:25 2024 00:21:54.254 write: IOPS=85, BW=21.4MiB/s (22.5MB/s)(218MiB/10168msec); 0 zone resets 00:21:54.254 slat (usec): min=22, max=157, avg=57.18, stdev=13.97 00:21:54.254 clat (msec): min=19, max=336, avg=186.32, stdev=17.21 00:21:54.254 lat (msec): min=19, max=336, avg=186.37, stdev=17.21 00:21:54.254 clat percentiles (msec): 00:21:54.254 | 1.00th=[ 113], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.254 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.254 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.254 | 99.00th=[ 243], 99.50th=[ 292], 99.90th=[ 338], 99.95th=[ 338], 00:21:54.254 | 99.99th=[ 338] 00:21:54.254 bw ( KiB/s): min=20992, max=22060, per=3.33%, avg=21934.80, 
stdev=249.90, samples=20 00:21:54.254 iops : min= 82, max= 86, avg=85.55, stdev= 1.00, samples=20 00:21:54.254 lat (msec) : 20=0.11%, 50=0.23%, 100=0.46%, 250=98.28%, 500=0.92% 00:21:54.254 cpu : usr=0.29%, sys=0.41%, ctx=876, majf=0, minf=1 00:21:54.254 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.254 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.254 issued rwts: total=0,872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.254 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.254 job4: (groupid=0, jobs=1): err= 0: pid=96163: Tue Jul 23 22:23:25 2024 00:21:54.254 write: IOPS=85, BW=21.4MiB/s (22.5MB/s)(218MiB/10165msec); 0 zone resets 00:21:54.254 slat (usec): min=26, max=2699, avg=62.29, stdev=90.67 00:21:54.254 clat (msec): min=24, max=336, avg=186.42, stdev=16.72 00:21:54.254 lat (msec): min=27, max=336, avg=186.48, stdev=16.70 00:21:54.254 clat percentiles (msec): 00:21:54.254 | 1.00th=[ 118], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.254 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.254 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.254 | 99.00th=[ 243], 99.50th=[ 292], 99.90th=[ 338], 99.95th=[ 338], 00:21:54.254 | 99.99th=[ 338] 00:21:54.255 bw ( KiB/s): min=20992, max=22528, per=3.33%, avg=21902.65, stdev=356.60, samples=20 00:21:54.255 iops : min= 82, max= 88, avg=85.35, stdev= 1.46, samples=20 00:21:54.255 lat (msec) : 50=0.34%, 100=0.46%, 250=98.28%, 500=0.92% 00:21:54.255 cpu : usr=0.32%, sys=0.37%, ctx=878, majf=0, minf=1 00:21:54.255 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.255 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.255 issued rwts: total=0,871,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:21:54.255 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.255 job5: (groupid=0, jobs=1): err= 0: pid=96170: Tue Jul 23 22:23:25 2024 00:21:54.255 write: IOPS=85, BW=21.4MiB/s (22.5MB/s)(218MiB/10169msec); 0 zone resets 00:21:54.255 slat (usec): min=28, max=1949, avg=58.50, stdev=65.44 00:21:54.255 clat (msec): min=12, max=336, avg=186.29, stdev=17.39 00:21:54.255 lat (msec): min=14, max=336, avg=186.35, stdev=17.37 00:21:54.255 clat percentiles (msec): 00:21:54.255 | 1.00th=[ 111], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.255 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.255 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.255 | 99.00th=[ 243], 99.50th=[ 292], 99.90th=[ 338], 99.95th=[ 338], 00:21:54.255 | 99.99th=[ 338] 00:21:54.255 bw ( KiB/s): min=21034, max=22016, per=3.33%, avg=21932.50, stdev=240.04, samples=20 00:21:54.255 iops : min= 82, max= 86, avg=85.50, stdev= 1.00, samples=20 00:21:54.255 lat (msec) : 20=0.11%, 50=0.23%, 100=0.57%, 250=98.17%, 500=0.92% 00:21:54.255 cpu : usr=0.22%, sys=0.45%, ctx=876, majf=0, minf=1 00:21:54.255 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.255 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.255 issued rwts: total=0,872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.255 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.255 job6: (groupid=0, jobs=1): err= 0: pid=96171: Tue Jul 23 22:23:25 2024 00:21:54.255 write: IOPS=85, BW=21.4MiB/s (22.5MB/s)(218MiB/10168msec); 0 zone resets 00:21:54.255 slat (usec): min=27, max=223, avg=59.11, stdev=16.61 00:21:54.255 clat (msec): min=20, max=333, avg=186.30, stdev=16.85 00:21:54.255 lat (msec): min=20, max=333, avg=186.36, stdev=16.85 00:21:54.255 clat percentiles (msec): 00:21:54.255 | 
1.00th=[ 115], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.255 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.255 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.255 | 99.00th=[ 241], 99.50th=[ 288], 99.90th=[ 334], 99.95th=[ 334], 00:21:54.255 | 99.99th=[ 334] 00:21:54.255 bw ( KiB/s): min=20992, max=22016, per=3.33%, avg=21928.20, stdev=247.75, samples=20 00:21:54.255 iops : min= 82, max= 86, avg=85.45, stdev= 1.00, samples=20 00:21:54.255 lat (msec) : 50=0.34%, 100=0.46%, 250=98.28%, 500=0.92% 00:21:54.255 cpu : usr=0.29%, sys=0.38%, ctx=874, majf=0, minf=1 00:21:54.255 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.255 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.255 issued rwts: total=0,872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.255 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.255 job7: (groupid=0, jobs=1): err= 0: pid=96193: Tue Jul 23 22:23:25 2024 00:21:54.255 write: IOPS=85, BW=21.4MiB/s (22.5MB/s)(218MiB/10169msec); 0 zone resets 00:21:54.255 slat (usec): min=19, max=522, avg=64.99, stdev=29.87 00:21:54.255 clat (msec): min=18, max=338, avg=186.32, stdev=17.40 00:21:54.255 lat (msec): min=18, max=338, avg=186.38, stdev=17.41 00:21:54.255 clat percentiles (msec): 00:21:54.255 | 1.00th=[ 112], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.255 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.255 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.255 | 99.00th=[ 245], 99.50th=[ 292], 99.90th=[ 338], 99.95th=[ 338], 00:21:54.255 | 99.99th=[ 338] 00:21:54.255 bw ( KiB/s): min=21034, max=22528, per=3.33%, avg=21932.50, stdev=291.91, samples=20 00:21:54.255 iops : min= 82, max= 88, avg=85.50, stdev= 1.19, samples=20 00:21:54.255 lat (msec) : 20=0.11%, 
50=0.23%, 100=0.57%, 250=98.17%, 500=0.92% 00:21:54.255 cpu : usr=0.24%, sys=0.47%, ctx=924, majf=0, minf=1 00:21:54.255 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.255 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.255 issued rwts: total=0,872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.255 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.255 job8: (groupid=0, jobs=1): err= 0: pid=96220: Tue Jul 23 22:23:25 2024 00:21:54.255 write: IOPS=85, BW=21.4MiB/s (22.5MB/s)(218MiB/10176msec); 0 zone resets 00:21:54.255 slat (usec): min=24, max=11492, avg=73.36, stdev=387.47 00:21:54.255 clat (msec): min=9, max=342, avg=186.24, stdev=18.43 00:21:54.255 lat (msec): min=20, max=342, avg=186.31, stdev=18.31 00:21:54.255 clat percentiles (msec): 00:21:54.255 | 1.00th=[ 104], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.255 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.255 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.255 | 99.00th=[ 249], 99.50th=[ 296], 99.90th=[ 342], 99.95th=[ 342], 00:21:54.255 | 99.99th=[ 342] 00:21:54.255 bw ( KiB/s): min=20950, max=22528, per=3.33%, avg=21928.35, stdev=310.23, samples=20 00:21:54.255 iops : min= 81, max= 88, avg=85.35, stdev= 1.42, samples=20 00:21:54.255 lat (msec) : 10=0.11%, 50=0.34%, 100=0.46%, 250=98.17%, 500=0.92% 00:21:54.255 cpu : usr=0.34%, sys=0.37%, ctx=879, majf=0, minf=1 00:21:54.255 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.255 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.255 issued rwts: total=0,872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.255 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.255 job9: 
(groupid=0, jobs=1): err= 0: pid=96309: Tue Jul 23 22:23:25 2024 00:21:54.255 write: IOPS=85, BW=21.4MiB/s (22.5MB/s)(218MiB/10170msec); 0 zone resets 00:21:54.255 slat (usec): min=26, max=132, avg=55.76, stdev=12.04 00:21:54.255 clat (msec): min=18, max=338, avg=186.34, stdev=17.32 00:21:54.255 lat (msec): min=18, max=338, avg=186.40, stdev=17.32 00:21:54.255 clat percentiles (msec): 00:21:54.255 | 1.00th=[ 113], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.255 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.255 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.255 | 99.00th=[ 245], 99.50th=[ 292], 99.90th=[ 338], 99.95th=[ 338], 00:21:54.255 | 99.99th=[ 338] 00:21:54.255 bw ( KiB/s): min=20992, max=22528, per=3.33%, avg=21930.40, stdev=298.79, samples=20 00:21:54.255 iops : min= 82, max= 88, avg=85.50, stdev= 1.19, samples=20 00:21:54.255 lat (msec) : 20=0.11%, 50=0.23%, 100=0.46%, 250=98.28%, 500=0.92% 00:21:54.255 cpu : usr=0.28%, sys=0.33%, ctx=874, majf=0, minf=1 00:21:54.255 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.255 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.255 issued rwts: total=0,872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.255 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.255 job10: (groupid=0, jobs=1): err= 0: pid=96316: Tue Jul 23 22:23:25 2024 00:21:54.255 write: IOPS=85, BW=21.4MiB/s (22.5MB/s)(218MiB/10170msec); 0 zone resets 00:21:54.255 slat (usec): min=21, max=201, avg=50.66, stdev=18.44 00:21:54.255 clat (msec): min=19, max=338, avg=186.35, stdev=17.34 00:21:54.255 lat (msec): min=19, max=338, avg=186.40, stdev=17.34 00:21:54.255 clat percentiles (msec): 00:21:54.255 | 1.00th=[ 113], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.255 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 
60.00th=[ 188], 00:21:54.255 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.255 | 99.00th=[ 245], 99.50th=[ 292], 99.90th=[ 338], 99.95th=[ 338], 00:21:54.255 | 99.99th=[ 338] 00:21:54.255 bw ( KiB/s): min=21034, max=22528, per=3.33%, avg=21934.70, stdev=292.39, samples=20 00:21:54.255 iops : min= 82, max= 88, avg=85.55, stdev= 1.19, samples=20 00:21:54.255 lat (msec) : 20=0.11%, 50=0.23%, 100=0.46%, 250=98.28%, 500=0.92% 00:21:54.255 cpu : usr=0.28%, sys=0.29%, ctx=913, majf=0, minf=1 00:21:54.255 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.255 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.255 issued rwts: total=0,872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.255 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.255 job11: (groupid=0, jobs=1): err= 0: pid=96317: Tue Jul 23 22:23:25 2024 00:21:54.255 write: IOPS=85, BW=21.4MiB/s (22.5MB/s)(218MiB/10166msec); 0 zone resets 00:21:54.255 slat (usec): min=27, max=140, avg=58.82, stdev=13.29 00:21:54.255 clat (msec): min=18, max=332, avg=186.27, stdev=16.88 00:21:54.255 lat (msec): min=18, max=332, avg=186.33, stdev=16.88 00:21:54.255 clat percentiles (msec): 00:21:54.255 | 1.00th=[ 114], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.255 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.255 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.255 | 99.00th=[ 239], 99.50th=[ 288], 99.90th=[ 334], 99.95th=[ 334], 00:21:54.255 | 99.99th=[ 334] 00:21:54.255 bw ( KiB/s): min=20992, max=22528, per=3.33%, avg=21932.60, stdev=299.61, samples=20 00:21:54.255 iops : min= 82, max= 88, avg=85.50, stdev= 1.19, samples=20 00:21:54.255 lat (msec) : 20=0.11%, 50=0.23%, 100=0.46%, 250=98.28%, 500=0.92% 00:21:54.255 cpu : usr=0.30%, sys=0.41%, ctx=872, majf=0, minf=1 00:21:54.255 
IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.255 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.255 issued rwts: total=0,872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.255 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.255 job12: (groupid=0, jobs=1): err= 0: pid=96318: Tue Jul 23 22:23:25 2024 00:21:54.255 write: IOPS=85, BW=21.5MiB/s (22.5MB/s)(218MiB/10160msec); 0 zone resets 00:21:54.256 slat (usec): min=26, max=225, avg=59.40, stdev=14.20 00:21:54.256 clat (msec): min=21, max=325, avg=186.16, stdev=16.25 00:21:54.256 lat (msec): min=21, max=325, avg=186.22, stdev=16.26 00:21:54.256 clat percentiles (msec): 00:21:54.256 | 1.00th=[ 115], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.256 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.256 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.256 | 99.00th=[ 232], 99.50th=[ 279], 99.90th=[ 326], 99.95th=[ 326], 00:21:54.256 | 99.99th=[ 326] 00:21:54.256 bw ( KiB/s): min=21034, max=22528, per=3.33%, avg=21930.30, stdev=291.42, samples=20 00:21:54.256 iops : min= 82, max= 88, avg=85.45, stdev= 1.19, samples=20 00:21:54.256 lat (msec) : 50=0.34%, 100=0.46%, 250=98.39%, 500=0.80% 00:21:54.256 cpu : usr=0.34%, sys=0.30%, ctx=874, majf=0, minf=1 00:21:54.256 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.256 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.256 issued rwts: total=0,872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.256 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.256 job13: (groupid=0, jobs=1): err= 0: pid=96319: Tue Jul 23 22:23:25 2024 00:21:54.256 write: IOPS=85, BW=21.4MiB/s 
(22.5MB/s)(218MiB/10169msec); 0 zone resets 00:21:54.256 slat (usec): min=17, max=221, avg=49.39, stdev=14.84 00:21:54.256 clat (msec): min=21, max=334, avg=186.34, stdev=16.85 00:21:54.256 lat (msec): min=21, max=334, avg=186.39, stdev=16.86 00:21:54.256 clat percentiles (msec): 00:21:54.256 | 1.00th=[ 115], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.256 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.256 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.256 | 99.00th=[ 241], 99.50th=[ 288], 99.90th=[ 334], 99.95th=[ 334], 00:21:54.256 | 99.99th=[ 334] 00:21:54.256 bw ( KiB/s): min=21034, max=22528, per=3.33%, avg=21936.95, stdev=297.13, samples=20 00:21:54.256 iops : min= 82, max= 88, avg=85.55, stdev= 1.28, samples=20 00:21:54.256 lat (msec) : 50=0.34%, 100=0.46%, 250=98.28%, 500=0.92% 00:21:54.256 cpu : usr=0.21%, sys=0.37%, ctx=881, majf=0, minf=1 00:21:54.256 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.256 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.256 issued rwts: total=0,872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.256 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.256 job14: (groupid=0, jobs=1): err= 0: pid=96320: Tue Jul 23 22:23:25 2024 00:21:54.256 write: IOPS=85, BW=21.4MiB/s (22.5MB/s)(218MiB/10169msec); 0 zone resets 00:21:54.256 slat (usec): min=25, max=123, avg=58.58, stdev=14.46 00:21:54.256 clat (msec): min=17, max=338, avg=186.32, stdev=17.40 00:21:54.256 lat (msec): min=18, max=338, avg=186.38, stdev=17.40 00:21:54.256 clat percentiles (msec): 00:21:54.256 | 1.00th=[ 112], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.256 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.256 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.256 | 99.00th=[ 
245], 99.50th=[ 292], 99.90th=[ 338], 99.95th=[ 338], 00:21:54.256 | 99.99th=[ 338] 00:21:54.256 bw ( KiB/s): min=21034, max=22528, per=3.33%, avg=21932.50, stdev=291.91, samples=20 00:21:54.256 iops : min= 82, max= 88, avg=85.50, stdev= 1.19, samples=20 00:21:54.256 lat (msec) : 20=0.11%, 50=0.23%, 100=0.57%, 250=98.17%, 500=0.92% 00:21:54.256 cpu : usr=0.33%, sys=0.37%, ctx=872, majf=0, minf=1 00:21:54.256 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.256 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.256 issued rwts: total=0,872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.256 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.256 job15: (groupid=0, jobs=1): err= 0: pid=96321: Tue Jul 23 22:23:25 2024 00:21:54.256 write: IOPS=86, BW=21.6MiB/s (22.6MB/s)(220MiB/10177msec); 0 zone resets 00:21:54.256 slat (usec): min=24, max=187, avg=62.97, stdev=16.17 00:21:54.256 clat (msec): min=5, max=342, avg=185.17, stdev=22.72 00:21:54.256 lat (msec): min=5, max=342, avg=185.24, stdev=22.72 00:21:54.256 clat percentiles (msec): 00:21:54.256 | 1.00th=[ 45], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.256 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.256 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.256 | 99.00th=[ 249], 99.50th=[ 296], 99.90th=[ 342], 99.95th=[ 342], 00:21:54.256 | 99.99th=[ 342] 00:21:54.256 bw ( KiB/s): min=21461, max=24015, per=3.35%, avg=22081.60, stdev=500.31, samples=20 00:21:54.256 iops : min= 83, max= 93, avg=85.95, stdev= 1.93, samples=20 00:21:54.256 lat (msec) : 10=0.46%, 20=0.34%, 50=0.23%, 100=0.46%, 250=97.61% 00:21:54.256 lat (msec) : 500=0.91% 00:21:54.256 cpu : usr=0.34%, sys=0.41%, ctx=881, majf=0, minf=1 00:21:54.256 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.256 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.256 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.256 issued rwts: total=0,878,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.256 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.256 job16: (groupid=0, jobs=1): err= 0: pid=96322: Tue Jul 23 22:23:25 2024 00:21:54.256 write: IOPS=85, BW=21.4MiB/s (22.5MB/s)(218MiB/10172msec); 0 zone resets 00:21:54.256 slat (usec): min=20, max=169, avg=49.90, stdev=16.11 00:21:54.256 clat (msec): min=20, max=338, avg=186.39, stdev=17.18 00:21:54.256 lat (msec): min=20, max=338, avg=186.44, stdev=17.18 00:21:54.256 clat percentiles (msec): 00:21:54.256 | 1.00th=[ 114], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.256 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.256 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.256 | 99.00th=[ 245], 99.50th=[ 292], 99.90th=[ 338], 99.95th=[ 338], 00:21:54.256 | 99.99th=[ 338] 00:21:54.256 bw ( KiB/s): min=21034, max=22528, per=3.33%, avg=21934.70, stdev=292.39, samples=20 00:21:54.256 iops : min= 82, max= 88, avg=85.55, stdev= 1.19, samples=20 00:21:54.256 lat (msec) : 50=0.34%, 100=0.46%, 250=98.28%, 500=0.92% 00:21:54.256 cpu : usr=0.28%, sys=0.29%, ctx=896, majf=0, minf=1 00:21:54.256 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.256 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.256 issued rwts: total=0,872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.256 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.256 job17: (groupid=0, jobs=1): err= 0: pid=96323: Tue Jul 23 22:23:25 2024 00:21:54.256 write: IOPS=86, BW=21.6MiB/s (22.6MB/s)(220MiB/10177msec); 0 zone resets 00:21:54.256 slat (usec): min=27, max=227, avg=60.23, stdev=15.16 
00:21:54.256 clat (msec): min=9, max=342, avg=185.18, stdev=22.66 00:21:54.256 lat (msec): min=9, max=342, avg=185.24, stdev=22.66 00:21:54.256 clat percentiles (msec): 00:21:54.256 | 1.00th=[ 45], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.256 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.256 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.256 | 99.00th=[ 249], 99.50th=[ 296], 99.90th=[ 342], 99.95th=[ 342], 00:21:54.256 | 99.99th=[ 342] 00:21:54.256 bw ( KiB/s): min=21461, max=24015, per=3.35%, avg=22081.60, stdev=500.31, samples=20 00:21:54.256 iops : min= 83, max= 93, avg=85.95, stdev= 1.93, samples=20 00:21:54.256 lat (msec) : 10=0.34%, 20=0.34%, 50=0.34%, 100=0.46%, 250=97.61% 00:21:54.256 lat (msec) : 500=0.91% 00:21:54.256 cpu : usr=0.33%, sys=0.33%, ctx=887, majf=0, minf=1 00:21:54.256 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.256 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.256 issued rwts: total=0,878,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.256 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.256 job18: (groupid=0, jobs=1): err= 0: pid=96324: Tue Jul 23 22:23:25 2024 00:21:54.256 write: IOPS=85, BW=21.4MiB/s (22.5MB/s)(218MiB/10167msec); 0 zone resets 00:21:54.256 slat (usec): min=27, max=230, avg=62.74, stdev=16.10 00:21:54.256 clat (msec): min=21, max=332, avg=186.28, stdev=16.69 00:21:54.256 lat (msec): min=21, max=332, avg=186.35, stdev=16.69 00:21:54.256 clat percentiles (msec): 00:21:54.256 | 1.00th=[ 115], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.256 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.256 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.256 | 99.00th=[ 239], 99.50th=[ 288], 99.90th=[ 334], 99.95th=[ 334], 00:21:54.256 | 
99.99th=[ 334] 00:21:54.256 bw ( KiB/s): min=21034, max=22528, per=3.33%, avg=21930.30, stdev=291.42, samples=20 00:21:54.256 iops : min= 82, max= 88, avg=85.45, stdev= 1.19, samples=20 00:21:54.256 lat (msec) : 50=0.34%, 100=0.46%, 250=98.28%, 500=0.92% 00:21:54.256 cpu : usr=0.32%, sys=0.39%, ctx=891, majf=0, minf=1 00:21:54.256 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.256 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.256 issued rwts: total=0,872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.256 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.256 job19: (groupid=0, jobs=1): err= 0: pid=96325: Tue Jul 23 22:23:25 2024 00:21:54.256 write: IOPS=85, BW=21.4MiB/s (22.5MB/s)(218MiB/10174msec); 0 zone resets 00:21:54.256 slat (usec): min=26, max=9229, avg=64.54, stdev=311.35 00:21:54.256 clat (msec): min=9, max=342, avg=186.25, stdev=18.42 00:21:54.256 lat (msec): min=18, max=342, avg=186.31, stdev=18.33 00:21:54.256 clat percentiles (msec): 00:21:54.256 | 1.00th=[ 104], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.256 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.256 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.256 | 99.00th=[ 249], 99.50th=[ 296], 99.90th=[ 342], 99.95th=[ 342], 00:21:54.256 | 99.99th=[ 342] 00:21:54.256 bw ( KiB/s): min=21034, max=22528, per=3.33%, avg=21932.55, stdev=296.55, samples=20 00:21:54.257 iops : min= 82, max= 88, avg=85.40, stdev= 1.27, samples=20 00:21:54.257 lat (msec) : 10=0.11%, 50=0.34%, 100=0.46%, 250=98.17%, 500=0.92% 00:21:54.257 cpu : usr=0.25%, sys=0.34%, ctx=921, majf=0, minf=1 00:21:54.257 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.257 complete : 0=0.0%, 
4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.257 issued rwts: total=0,872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.257 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.257 job20: (groupid=0, jobs=1): err= 0: pid=96326: Tue Jul 23 22:23:25 2024 00:21:54.257 write: IOPS=85, BW=21.4MiB/s (22.5MB/s)(218MiB/10168msec); 0 zone resets 00:21:54.257 slat (usec): min=24, max=167, avg=53.80, stdev=13.13 00:21:54.257 clat (msec): min=18, max=336, avg=186.32, stdev=17.21 00:21:54.257 lat (msec): min=18, max=336, avg=186.37, stdev=17.22 00:21:54.257 clat percentiles (msec): 00:21:54.257 | 1.00th=[ 112], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.257 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.257 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.257 | 99.00th=[ 243], 99.50th=[ 292], 99.90th=[ 338], 99.95th=[ 338], 00:21:54.257 | 99.99th=[ 338] 00:21:54.257 bw ( KiB/s): min=20992, max=22060, per=3.33%, avg=21934.80, stdev=249.90, samples=20 00:21:54.257 iops : min= 82, max= 86, avg=85.55, stdev= 1.00, samples=20 00:21:54.257 lat (msec) : 20=0.11%, 50=0.23%, 100=0.46%, 250=98.28%, 500=0.92% 00:21:54.257 cpu : usr=0.36%, sys=0.24%, ctx=877, majf=0, minf=1 00:21:54.257 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.257 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.257 issued rwts: total=0,872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.257 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.257 job21: (groupid=0, jobs=1): err= 0: pid=96327: Tue Jul 23 22:23:25 2024 00:21:54.257 write: IOPS=86, BW=21.6MiB/s (22.6MB/s)(220MiB/10177msec); 0 zone resets 00:21:54.257 slat (usec): min=17, max=138, avg=58.52, stdev=14.35 00:21:54.257 clat (msec): min=10, max=344, avg=184.98, stdev=23.44 00:21:54.257 lat (msec): 
min=10, max=344, avg=185.04, stdev=23.45 00:21:54.257 clat percentiles (msec): 00:21:54.257 | 1.00th=[ 32], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.257 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.257 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.257 | 99.00th=[ 251], 99.50th=[ 296], 99.90th=[ 347], 99.95th=[ 347], 00:21:54.257 | 99.99th=[ 347] 00:21:54.257 bw ( KiB/s): min=21461, max=24576, per=3.36%, avg=22096.40, stdev=616.95, samples=20 00:21:54.257 iops : min= 83, max= 96, avg=85.95, stdev= 2.52, samples=20 00:21:54.257 lat (msec) : 20=0.80%, 50=0.34%, 100=0.46%, 250=97.38%, 500=1.02% 00:21:54.257 cpu : usr=0.28%, sys=0.42%, ctx=877, majf=0, minf=1 00:21:54.257 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.257 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.257 issued rwts: total=0,879,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.257 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.257 job22: (groupid=0, jobs=1): err= 0: pid=96328: Tue Jul 23 22:23:25 2024 00:21:54.257 write: IOPS=85, BW=21.4MiB/s (22.5MB/s)(218MiB/10171msec); 0 zone resets 00:21:54.257 slat (usec): min=31, max=2287, avg=61.20, stdev=76.53 00:21:54.257 clat (msec): min=21, max=334, avg=186.33, stdev=16.86 00:21:54.257 lat (msec): min=23, max=334, avg=186.39, stdev=16.84 00:21:54.257 clat percentiles (msec): 00:21:54.257 | 1.00th=[ 115], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.257 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.257 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.257 | 99.00th=[ 241], 99.50th=[ 288], 99.90th=[ 334], 99.95th=[ 334], 00:21:54.257 | 99.99th=[ 334] 00:21:54.257 bw ( KiB/s): min=20992, max=22528, per=3.33%, avg=21930.45, stdev=302.65, samples=20 
00:21:54.257 iops : min= 82, max= 88, avg=85.50, stdev= 1.28, samples=20 00:21:54.257 lat (msec) : 50=0.34%, 100=0.46%, 250=98.28%, 500=0.92% 00:21:54.257 cpu : usr=0.33%, sys=0.38%, ctx=873, majf=0, minf=1 00:21:54.257 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.257 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.257 issued rwts: total=0,872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.257 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.257 job23: (groupid=0, jobs=1): err= 0: pid=96329: Tue Jul 23 22:23:25 2024 00:21:54.257 write: IOPS=85, BW=21.4MiB/s (22.5MB/s)(218MiB/10163msec); 0 zone resets 00:21:54.257 slat (usec): min=23, max=411, avg=59.93, stdev=17.92 00:21:54.257 clat (msec): min=24, max=337, avg=186.42, stdev=16.82 00:21:54.257 lat (msec): min=24, max=337, avg=186.48, stdev=16.81 00:21:54.257 clat percentiles (msec): 00:21:54.257 | 1.00th=[ 118], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.257 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.257 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.257 | 99.00th=[ 245], 99.50th=[ 292], 99.90th=[ 338], 99.95th=[ 338], 00:21:54.257 | 99.99th=[ 338] 00:21:54.257 bw ( KiB/s): min=21034, max=22528, per=3.33%, avg=21909.15, stdev=352.47, samples=20 00:21:54.257 iops : min= 82, max= 88, avg=85.40, stdev= 1.47, samples=20 00:21:54.257 lat (msec) : 50=0.34%, 100=0.46%, 250=98.28%, 500=0.92% 00:21:54.257 cpu : usr=0.32%, sys=0.39%, ctx=874, majf=0, minf=1 00:21:54.257 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.257 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.257 issued rwts: total=0,871,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:21:54.257 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.257 job24: (groupid=0, jobs=1): err= 0: pid=96330: Tue Jul 23 22:23:25 2024 00:21:54.257 write: IOPS=85, BW=21.4MiB/s (22.5MB/s)(218MiB/10168msec); 0 zone resets 00:21:54.257 slat (usec): min=27, max=218, avg=57.56, stdev=17.69 00:21:54.257 clat (msec): min=20, max=333, avg=186.30, stdev=16.87 00:21:54.257 lat (msec): min=20, max=333, avg=186.36, stdev=16.87 00:21:54.257 clat percentiles (msec): 00:21:54.257 | 1.00th=[ 114], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.257 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.257 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.257 | 99.00th=[ 241], 99.50th=[ 288], 99.90th=[ 334], 99.95th=[ 334], 00:21:54.257 | 99.99th=[ 334] 00:21:54.257 bw ( KiB/s): min=20992, max=22016, per=3.33%, avg=21928.20, stdev=247.75, samples=20 00:21:54.257 iops : min= 82, max= 86, avg=85.45, stdev= 1.00, samples=20 00:21:54.257 lat (msec) : 50=0.34%, 100=0.46%, 250=98.28%, 500=0.92% 00:21:54.257 cpu : usr=0.39%, sys=0.30%, ctx=887, majf=0, minf=1 00:21:54.257 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.257 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.257 issued rwts: total=0,872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.257 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.257 job25: (groupid=0, jobs=1): err= 0: pid=96331: Tue Jul 23 22:23:25 2024 00:21:54.257 write: IOPS=85, BW=21.4MiB/s (22.5MB/s)(218MiB/10167msec); 0 zone resets 00:21:54.257 slat (usec): min=28, max=380, avg=59.56, stdev=18.04 00:21:54.257 clat (msec): min=21, max=332, avg=186.29, stdev=16.78 00:21:54.257 lat (msec): min=21, max=333, avg=186.35, stdev=16.78 00:21:54.257 clat percentiles (msec): 00:21:54.257 | 1.00th=[ 115], 5.00th=[ 184], 10.00th=[ 186], 
20.00th=[ 186], 00:21:54.257 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.257 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.257 | 99.00th=[ 241], 99.50th=[ 288], 99.90th=[ 334], 99.95th=[ 334], 00:21:54.257 | 99.99th=[ 334] 00:21:54.257 bw ( KiB/s): min=21034, max=22528, per=3.33%, avg=21930.30, stdev=291.42, samples=20 00:21:54.257 iops : min= 82, max= 88, avg=85.45, stdev= 1.19, samples=20 00:21:54.257 lat (msec) : 50=0.34%, 100=0.46%, 250=98.28%, 500=0.92% 00:21:54.257 cpu : usr=0.28%, sys=0.43%, ctx=879, majf=0, minf=1 00:21:54.257 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.257 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.257 issued rwts: total=0,872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.257 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.257 job26: (groupid=0, jobs=1): err= 0: pid=96332: Tue Jul 23 22:23:25 2024 00:21:54.257 write: IOPS=85, BW=21.5MiB/s (22.5MB/s)(218MiB/10161msec); 0 zone resets 00:21:54.257 slat (usec): min=25, max=291, avg=58.15, stdev=17.24 00:21:54.257 clat (msec): min=21, max=326, avg=186.18, stdev=16.38 00:21:54.257 lat (msec): min=21, max=326, avg=186.24, stdev=16.38 00:21:54.257 clat percentiles (msec): 00:21:54.257 | 1.00th=[ 115], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.257 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.257 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.257 | 99.00th=[ 234], 99.50th=[ 279], 99.90th=[ 326], 99.95th=[ 326], 00:21:54.257 | 99.99th=[ 326] 00:21:54.257 bw ( KiB/s): min=20992, max=22528, per=3.33%, avg=21928.20, stdev=298.29, samples=20 00:21:54.257 iops : min= 82, max= 88, avg=85.45, stdev= 1.19, samples=20 00:21:54.257 lat (msec) : 50=0.34%, 100=0.46%, 250=98.39%, 500=0.80% 00:21:54.257 cpu : 
usr=0.32%, sys=0.37%, ctx=877, majf=0, minf=1 00:21:54.257 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.257 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.257 issued rwts: total=0,872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.257 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.257 job27: (groupid=0, jobs=1): err= 0: pid=96333: Tue Jul 23 22:23:25 2024 00:21:54.257 write: IOPS=85, BW=21.4MiB/s (22.5MB/s)(218MiB/10167msec); 0 zone resets 00:21:54.257 slat (usec): min=23, max=151, avg=60.94, stdev=15.12 00:21:54.257 clat (msec): min=21, max=332, avg=186.28, stdev=16.73 00:21:54.258 lat (msec): min=21, max=332, avg=186.34, stdev=16.73 00:21:54.258 clat percentiles (msec): 00:21:54.258 | 1.00th=[ 115], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.258 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.258 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.258 | 99.00th=[ 239], 99.50th=[ 288], 99.90th=[ 334], 99.95th=[ 334], 00:21:54.258 | 99.99th=[ 334] 00:21:54.258 bw ( KiB/s): min=21034, max=22528, per=3.33%, avg=21930.30, stdev=291.42, samples=20 00:21:54.258 iops : min= 82, max= 88, avg=85.45, stdev= 1.19, samples=20 00:21:54.258 lat (msec) : 50=0.34%, 100=0.46%, 250=98.28%, 500=0.92% 00:21:54.258 cpu : usr=0.30%, sys=0.45%, ctx=875, majf=0, minf=1 00:21:54.258 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.258 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.258 issued rwts: total=0,872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.258 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.258 job28: (groupid=0, jobs=1): err= 0: pid=96334: Tue Jul 23 22:23:25 2024 
00:21:54.258 write: IOPS=85, BW=21.4MiB/s (22.5MB/s)(218MiB/10167msec); 0 zone resets 00:21:54.258 slat (usec): min=25, max=132, avg=58.92, stdev=12.81 00:21:54.258 clat (msec): min=20, max=333, avg=186.30, stdev=16.87 00:21:54.258 lat (msec): min=20, max=333, avg=186.36, stdev=16.87 00:21:54.258 clat percentiles (msec): 00:21:54.258 | 1.00th=[ 114], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.258 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.258 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:21:54.258 | 99.00th=[ 241], 99.50th=[ 288], 99.90th=[ 334], 99.95th=[ 334], 00:21:54.258 | 99.99th=[ 334] 00:21:54.258 bw ( KiB/s): min=20992, max=22016, per=3.33%, avg=21928.20, stdev=247.75, samples=20 00:21:54.258 iops : min= 82, max= 86, avg=85.45, stdev= 1.00, samples=20 00:21:54.258 lat (msec) : 50=0.34%, 100=0.46%, 250=98.28%, 500=0.92% 00:21:54.258 cpu : usr=0.40%, sys=0.25%, ctx=874, majf=0, minf=1 00:21:54.258 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.258 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.258 issued rwts: total=0,872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.258 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.258 job29: (groupid=0, jobs=1): err= 0: pid=96335: Tue Jul 23 22:23:25 2024 00:21:54.258 write: IOPS=85, BW=21.4MiB/s (22.5MB/s)(218MiB/10166msec); 0 zone resets 00:21:54.258 slat (usec): min=25, max=191, avg=48.25, stdev=13.38 00:21:54.258 clat (msec): min=20, max=332, avg=186.28, stdev=16.79 00:21:54.258 lat (msec): min=20, max=332, avg=186.33, stdev=16.79 00:21:54.258 clat percentiles (msec): 00:21:54.258 | 1.00th=[ 114], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:21:54.258 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:21:54.258 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 
95.00th=[ 190], 00:21:54.258 | 99.00th=[ 239], 99.50th=[ 288], 99.90th=[ 334], 99.95th=[ 334], 00:21:54.258 | 99.99th=[ 334] 00:21:54.258 bw ( KiB/s): min=20992, max=22528, per=3.33%, avg=21932.60, stdev=299.61, samples=20 00:21:54.258 iops : min= 82, max= 88, avg=85.50, stdev= 1.19, samples=20 00:21:54.258 lat (msec) : 50=0.34%, 100=0.46%, 250=98.28%, 500=0.92% 00:21:54.258 cpu : usr=0.29%, sys=0.29%, ctx=894, majf=0, minf=1 00:21:54.258 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=98.3%, 32=0.0%, >=64=0.0% 00:21:54.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.258 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.258 issued rwts: total=0,872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.258 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:54.258 00:21:54.258 Run status group 0 (all jobs): 00:21:54.258 WRITE: bw=643MiB/s (674MB/s), 21.4MiB/s-21.6MiB/s (22.5MB/s-22.6MB/s), io=6544MiB (6862MB), run=10160-10177msec 00:21:54.258 00:21:54.258 Disk stats (read/write): 00:21:54.258 sda: ios=48/856, merge=0/0, ticks=51/158406, in_queue=158456, util=93.76% 00:21:54.258 sdb: ios=48/855, merge=0/0, ticks=70/158259, in_queue=158329, util=93.89% 00:21:54.258 sdc: ios=48/856, merge=0/0, ticks=84/158418, in_queue=158502, util=94.50% 00:21:54.258 sdd: ios=48/855, merge=0/0, ticks=106/158249, in_queue=158355, util=94.62% 00:21:54.258 sde: ios=41/854, merge=0/0, ticks=126/158152, in_queue=158279, util=94.81% 00:21:54.258 sdg: ios=28/855, merge=0/0, ticks=109/158221, in_queue=158330, util=94.81% 00:21:54.258 sdf: ios=31/855, merge=0/0, ticks=116/158271, in_queue=158386, util=94.84% 00:21:54.258 sdh: ios=14/856, merge=0/0, ticks=58/158389, in_queue=158448, util=94.97% 00:21:54.258 sdi: ios=0/857, merge=0/0, ticks=0/158465, in_queue=158465, util=95.02% 00:21:54.258 sdj: ios=0/856, merge=0/0, ticks=0/158445, in_queue=158445, util=95.49% 00:21:54.258 sdk: ios=0/856, merge=0/0, ticks=0/158368, 
in_queue=158368, util=95.63% 00:21:54.258 sdl: ios=0/855, merge=0/0, ticks=0/158271, in_queue=158271, util=95.86% 00:21:54.258 sdm: ios=0/854, merge=0/0, ticks=0/158083, in_queue=158083, util=95.87% 00:21:54.258 sdn: ios=0/855, merge=0/0, ticks=0/158222, in_queue=158222, util=96.20% 00:21:54.258 sdo: ios=0/856, merge=0/0, ticks=0/158415, in_queue=158415, util=96.27% 00:21:54.258 sdp: ios=0/863, merge=0/0, ticks=0/158632, in_queue=158633, util=96.82% 00:21:54.258 sdq: ios=0/856, merge=0/0, ticks=0/158401, in_queue=158400, util=96.89% 00:21:54.258 sdr: ios=0/863, merge=0/0, ticks=0/158647, in_queue=158647, util=97.34% 00:21:54.258 sds: ios=0/855, merge=0/0, ticks=0/158258, in_queue=158257, util=97.27% 00:21:54.258 sdt: ios=0/857, merge=0/0, ticks=0/158434, in_queue=158434, util=97.74% 00:21:54.258 sdu: ios=0/856, merge=0/0, ticks=0/158442, in_queue=158441, util=97.78% 00:21:54.258 sdv: ios=0/864, merge=0/0, ticks=0/158636, in_queue=158635, util=98.28% 00:21:54.258 sdw: ios=0/855, merge=0/0, ticks=0/158277, in_queue=158277, util=98.07% 00:21:54.258 sdx: ios=0/855, merge=0/0, ticks=0/158329, in_queue=158329, util=98.15% 00:21:54.258 sdy: ios=0/855, merge=0/0, ticks=0/158245, in_queue=158245, util=98.19% 00:21:54.258 sdz: ios=0/855, merge=0/0, ticks=0/158270, in_queue=158269, util=98.28% 00:21:54.258 sdaa: ios=0/854, merge=0/0, ticks=0/158092, in_queue=158092, util=98.33% 00:21:54.258 sdab: ios=0/855, merge=0/0, ticks=0/158277, in_queue=158278, util=98.41% 00:21:54.258 sdac: ios=0/855, merge=0/0, ticks=0/158283, in_queue=158282, util=98.56% 00:21:54.258 sdad: ios=0/855, merge=0/0, ticks=0/158213, in_queue=158213, util=98.85% 00:21:54.258 [2024-07-23 22:23:25.066702] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.258 [2024-07-23 22:23:25.069550] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.258 [2024-07-23 22:23:25.072214] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY 
VPD page 0xb9 00:21:54.258 [2024-07-23 22:23:25.074960] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.258 [2024-07-23 22:23:25.077802] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.258 22:23:25 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@79 -- # sync 00:21:54.258 [2024-07-23 22:23:25.080608] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.258 [2024-07-23 22:23:25.083819] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.258 [2024-07-23 22:23:25.087119] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.258 [2024-07-23 22:23:25.090104] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.258 22:23:25 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:21:54.258 22:23:25 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@83 -- # rm -f 00:21:54.258 [2024-07-23 22:23:25.093349] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.258 22:23:25 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@84 -- # iscsicleanup 00:21:54.258 Cleaning up iSCSI connection 00:21:54.258 22:23:25 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:21:54.258 22:23:25 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:21:54.258 [2024-07-23 22:23:25.096584] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.258 [2024-07-23 22:23:25.100734] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.258 [2024-07-23 22:23:25.104452] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 
00:21:54.258 Logging out of session [sid: 41, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:21:54.258 Logging out of session [sid: 42, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:21:54.258 Logging out of session [sid: 43, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:21:54.258 Logging out of session [sid: 44, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:21:54.258 Logging out of session [sid: 45, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:21:54.258 Logging out of session [sid: 46, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:21:54.258 Logging out of session [sid: 47, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:21:54.258 Logging out of session [sid: 48, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:21:54.258 Logging out of session [sid: 49, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:21:54.259 Logging out of session [sid: 50, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:21:54.259 Logging out of session [sid: 51, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:21:54.259 Logging out of session [sid: 52, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:21:54.259 Logging out of session [sid: 53, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:21:54.259 Logging out of session [sid: 54, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:21:54.259 Logging out of session [sid: 55, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:21:54.259 Logging out of session [sid: 56, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] 00:21:54.259 Logging out of session [sid: 57, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] 00:21:54.259 Logging out of session [sid: 58, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] 00:21:54.259 Logging out of session [sid: 59, target: 
iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] 00:21:54.259 Logging out of session [sid: 60, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] 00:21:54.259 Logging out of session [sid: 61, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] 00:21:54.259 Logging out of session [sid: 62, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] 00:21:54.259 Logging out of session [sid: 63, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] 00:21:54.259 Logging out of session [sid: 64, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] 00:21:54.259 Logging out of session [sid: 65, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] 00:21:54.259 Logging out of session [sid: 66, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] 00:21:54.259 Logging out of session [sid: 67, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] 00:21:54.259 Logging out of session [sid: 68, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] 00:21:54.259 Logging out of session [sid: 69, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] 00:21:54.259 Logging out of session [sid: 70, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] 00:21:54.259 Logout of [sid: 41, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:21:54.259 Logout of [sid: 42, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:21:54.259 Logout of [sid: 43, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:21:54.259 Logout of [sid: 44, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:21:54.259 Logout of [sid: 45, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:21:54.259 Logout of [sid: 46, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:21:54.259 Logout of [sid: 47, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 
00:21:54.259 Logout of [sid: 48, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:21:54.259 Logout of [sid: 49, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:21:54.259 Logout of [sid: 50, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:21:54.259 Logout of [sid: 51, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:21:54.259 Logout of [sid: 52, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:21:54.259 Logout of [sid: 53, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:21:54.259 Logout of [sid: 54, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 00:21:54.259 Logout of [sid: 55, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 00:21:54.259 Logout of [sid: 56, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] successful. 00:21:54.259 Logout of [sid: 57, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] successful. 00:21:54.259 Logout of [sid: 58, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] successful. 00:21:54.259 Logout of [sid: 59, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] successful. 00:21:54.259 Logout of [sid: 60, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] successful. 00:21:54.259 Logout of [sid: 61, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] successful. 00:21:54.259 Logout of [sid: 62, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] successful. 00:21:54.259 Logout of [sid: 63, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] successful. 00:21:54.259 Logout of [sid: 64, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] successful. 00:21:54.259 Logout of [sid: 65, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] successful. 
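The logout/delete sequence above is driven by the `iscsicleanup` helper visible in the log (autotest_common.sh around lines 980-983): announce the cleanup, log out of every iSCSI node record, then delete the records. A minimal sketch of that shape follows; since `iscsiadm` needs root and a live initiator, the two commands are echoed here rather than executed.

```shell
# Sketch of the iscsicleanup helper seen in the log. The real helper runs
# "iscsiadm -m node --logout" and "iscsiadm -m node -o delete"; here they are
# echoed so the sketch is runnable without an iSCSI initiator.
iscsicleanup() {
    echo 'Cleaning up iSCSI connection'
    echo 'would run: iscsiadm -m node --logout'
    echo 'would run: iscsiadm -m node -o delete'
}

iscsicleanup
```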
00:21:54.259 Logout of [sid: 66, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] successful. 00:21:54.259 Logout of [sid: 67, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] successful. 00:21:54.259 Logout of [sid: 68, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] successful. 00:21:54.259 Logout of [sid: 69, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] successful. 00:21:54.259 Logout of [sid: 70, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] successful. 00:21:54.259 22:23:26 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:21:54.259 22:23:26 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@983 -- # rm -rf 00:21:54.259 22:23:26 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@85 -- # remove_backends 00:21:54.259 INFO: Removing lvol bdevs 00:21:54.259 22:23:26 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@22 -- # echo 'INFO: Removing lvol bdevs' 00:21:54.259 22:23:26 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # seq 1 30 00:21:54.259 22:23:26 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:54.259 22:23:26 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_1 00:21:54.259 22:23:26 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_1 00:21:54.259 [2024-07-23 22:23:26.280524] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (d57b26f6-a1c1-4a9c-bf41-28a27b6e25a6) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:54.259 INFO: lvol bdev lvs0/lbd_1 removed 00:21:54.259 22:23:26 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_1 removed' 00:21:54.259 22:23:26 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:54.259 22:23:26 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_2 00:21:54.259 22:23:26 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_2 00:21:54.518 [2024-07-23 22:23:26.552613] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (45b35dd4-5ac9-4c16-9e06-91b7d5cf3ec8) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:54.518 INFO: lvol bdev lvs0/lbd_2 removed 00:21:54.518 22:23:26 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_2 removed' 00:21:54.518 22:23:26 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:54.518 22:23:26 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_3 00:21:54.518 22:23:26 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_3 00:21:54.777 [2024-07-23 22:23:26.780730] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (68cef7a7-760d-4ed2-88a0-d44d83f983d7) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:54.777 INFO: lvol bdev lvs0/lbd_3 removed 00:21:54.777 22:23:26 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_3 removed' 00:21:54.777 22:23:26 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:54.777 22:23:26 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_4 00:21:54.777 22:23:26 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete lvs0/lbd_4 00:21:55.035 [2024-07-23 22:23:27.032808] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (0635f80d-5e30-4420-a80e-17312f7fcaf6) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:55.035 INFO: lvol bdev lvs0/lbd_4 removed 00:21:55.035 22:23:27 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_4 removed' 00:21:55.035 22:23:27 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:55.035 22:23:27 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_5 00:21:55.035 22:23:27 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_5 00:21:55.035 [2024-07-23 22:23:27.208872] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (b13efc0c-b74a-4b0a-bb00-1351522f3132) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:55.035 INFO: lvol bdev lvs0/lbd_5 removed 00:21:55.035 22:23:27 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_5 removed' 00:21:55.035 22:23:27 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:55.035 22:23:27 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_6 00:21:55.035 22:23:27 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_6 00:21:55.293 [2024-07-23 22:23:27.384935] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (eb147db9-cff3-4fac-a0cf-059ec2d6b283) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:55.293 INFO: lvol bdev lvs0/lbd_6 removed 00:21:55.293 22:23:27 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_6 removed' 
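The repeated `bdev_lvol_delete` calls in the log come from a simple loop in multiconnection.sh (lines 23-26): iterate `seq 1 $CONNECTION_NUMBER`, delete `lvs0/lbd_$i` via rpc.py, and echo the tab-indented INFO line. A reconstruction of that loop is sketched below; `CONNECTION_NUMBER=30` matches the `seq 1 30` in the log, and the rpc.py invocation is shown as a comment since it needs a running SPDK target.

```shell
# Reconstructed removal loop from multiconnection.sh as seen in the log.
CONNECTION_NUMBER=30
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path taken from the log

for i in $(seq 1 "$CONNECTION_NUMBER"); do
    lun="lvs0/lbd_$i"
    # real script: "$rpc_py" bdev_lvol_delete "$lun"
    printf '\tINFO: lvol bdev %s removed\n' "$lun"
done
```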
00:21:55.293 22:23:27 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:55.293 22:23:27 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_7 00:21:55.293 22:23:27 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_7 00:21:55.551 [2024-07-23 22:23:27.560977] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (9fd7de83-8bfc-4286-8cec-aef5164ad645) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:55.551 INFO: lvol bdev lvs0/lbd_7 removed 00:21:55.551 22:23:27 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_7 removed' 00:21:55.551 22:23:27 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:55.551 22:23:27 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_8 00:21:55.551 22:23:27 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_8 00:21:55.810 [2024-07-23 22:23:27.749045] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (40ff9e9b-6e3a-4f92-8198-84da550176e6) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:55.810 INFO: lvol bdev lvs0/lbd_8 removed 00:21:55.810 22:23:27 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_8 removed' 00:21:55.810 22:23:27 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:55.810 22:23:27 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_9 00:21:55.810 22:23:27 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_9 00:21:55.810 [2024-07-23 22:23:27.933113] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (e1277939-b712-424f-9398-1de3edc17199) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:55.810 INFO: lvol bdev lvs0/lbd_9 removed 00:21:55.810 22:23:27 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_9 removed' 00:21:55.810 22:23:27 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:55.810 22:23:27 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_10 00:21:55.810 22:23:27 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_10 00:21:56.069 [2024-07-23 22:23:28.117158] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (79a3737e-b2b8-4feb-975f-7e6a3a3d1576) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:56.069 INFO: lvol bdev lvs0/lbd_10 removed 00:21:56.069 22:23:28 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_10 removed' 00:21:56.069 22:23:28 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:56.069 22:23:28 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_11 00:21:56.069 22:23:28 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_11 00:21:56.329 [2024-07-23 22:23:28.313224] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (0524d117-4ed5-4996-9a79-e2b802a835fa) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:56.329 INFO: lvol bdev lvs0/lbd_11 removed 00:21:56.329 22:23:28 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # 
echo -e '\tINFO: lvol bdev lvs0/lbd_11 removed' 00:21:56.329 22:23:28 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:56.329 22:23:28 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_12 00:21:56.329 22:23:28 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_12 00:21:56.329 [2024-07-23 22:23:28.505420] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (f739f204-d3e1-43e4-a446-01c31da61d54) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:56.588 INFO: lvol bdev lvs0/lbd_12 removed 00:21:56.588 22:23:28 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_12 removed' 00:21:56.588 22:23:28 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:56.588 22:23:28 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_13 00:21:56.588 22:23:28 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_13 00:21:56.589 [2024-07-23 22:23:28.713047] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (7e7d3b54-6644-4e7b-a19a-f29d07ef6d55) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:56.589 INFO: lvol bdev lvs0/lbd_13 removed 00:21:56.589 22:23:28 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_13 removed' 00:21:56.589 22:23:28 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:56.589 22:23:28 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_14 00:21:56.589 22:23:28 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_14 00:21:56.847 [2024-07-23 22:23:28.985112] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (4f4f30ca-5484-4b7f-9ff2-e277c949f9b9) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:56.847 INFO: lvol bdev lvs0/lbd_14 removed 00:21:56.847 22:23:29 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_14 removed' 00:21:56.847 22:23:29 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:56.847 22:23:29 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_15 00:21:56.847 22:23:29 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_15 00:21:57.107 [2024-07-23 22:23:29.173191] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (b71f296f-7e52-4a5d-8414-1080175fd536) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:57.107 INFO: lvol bdev lvs0/lbd_15 removed 00:21:57.107 22:23:29 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_15 removed' 00:21:57.107 22:23:29 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:57.107 22:23:29 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_16 00:21:57.107 22:23:29 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_16 00:21:57.366 [2024-07-23 22:23:29.409275] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (6a1b2ed9-c55b-4de9-9d03-eb9ecedfebb7) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:57.366 INFO: lvol bdev lvs0/lbd_16 removed 00:21:57.366 22:23:29 iscsi_tgt.iscsi_tgt_multiconnection 
-- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_16 removed' 00:21:57.366 22:23:29 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:57.366 22:23:29 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_17 00:21:57.366 22:23:29 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_17 00:21:57.625 [2024-07-23 22:23:29.597327] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (15207816-d0a8-4c18-acc4-26c5bc56f7e4) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:57.625 INFO: lvol bdev lvs0/lbd_17 removed 00:21:57.625 22:23:29 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_17 removed' 00:21:57.625 22:23:29 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:57.625 22:23:29 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_18 00:21:57.625 22:23:29 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_18 00:21:57.884 [2024-07-23 22:23:29.841407] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (e2c909e3-4821-42c0-be7e-0302d8e45ae6) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:57.884 INFO: lvol bdev lvs0/lbd_18 removed 00:21:57.884 22:23:29 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_18 removed' 00:21:57.884 22:23:29 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:57.884 22:23:29 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_19 00:21:57.884 22:23:29 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_19 00:21:57.884 [2024-07-23 22:23:30.029460] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (39d4e49b-efb8-4450-8bce-ab20a8cb8d6f) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:57.884 INFO: lvol bdev lvs0/lbd_19 removed 00:21:57.884 22:23:30 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_19 removed' 00:21:57.884 22:23:30 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:57.884 22:23:30 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_20 00:21:57.884 22:23:30 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_20 00:21:58.143 [2024-07-23 22:23:30.205512] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (63e6e119-4f34-4528-a0dd-ad74d29c92ad) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:58.143 INFO: lvol bdev lvs0/lbd_20 removed 00:21:58.143 22:23:30 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_20 removed' 00:21:58.143 22:23:30 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:58.143 22:23:30 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_21 00:21:58.143 22:23:30 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_21 00:21:58.402 [2024-07-23 22:23:30.393576] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (37028589-e95d-4d9e-ae80-e33681c99c5b) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:58.402 INFO: lvol bdev lvs0/lbd_21 removed 00:21:58.402 
22:23:30 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_21 removed' 00:21:58.402 22:23:30 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:58.402 22:23:30 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_22 00:21:58.402 22:23:30 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_22 00:21:58.402 [2024-07-23 22:23:30.577622] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (609f1471-9455-4f8b-ac94-efaf87a46881) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:58.402 INFO: lvol bdev lvs0/lbd_22 removed 00:21:58.402 22:23:30 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_22 removed' 00:21:58.402 22:23:30 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:58.402 22:23:30 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_23 00:21:58.402 22:23:30 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_23 00:21:58.661 [2024-07-23 22:23:30.761686] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (bbf3773d-4f6a-4fb3-919b-e58d94e8c9a0) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:58.662 INFO: lvol bdev lvs0/lbd_23 removed 00:21:58.662 22:23:30 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_23 removed' 00:21:58.662 22:23:30 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:58.662 22:23:30 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # 
lun=lvs0/lbd_24 00:21:58.662 22:23:30 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_24 00:21:58.921 [2024-07-23 22:23:30.953741] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (4a2dfb51-0cce-42ea-932c-fd5148a1979d) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:58.921 INFO: lvol bdev lvs0/lbd_24 removed 00:21:58.921 22:23:30 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_24 removed' 00:21:58.921 22:23:30 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:58.921 22:23:30 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_25 00:21:58.921 22:23:30 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_25 00:21:59.180 [2024-07-23 22:23:31.141799] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (56837505-73fa-4317-a1a1-61f7413bf878) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:59.180 INFO: lvol bdev lvs0/lbd_25 removed 00:21:59.180 22:23:31 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_25 removed' 00:21:59.180 22:23:31 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:59.180 22:23:31 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_26 00:21:59.180 22:23:31 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_26 00:21:59.180 [2024-07-23 22:23:31.321967] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (1e81274b-ca9b-4a61-9962-8bce4736552b) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:59.180 INFO: lvol bdev 
lvs0/lbd_26 removed 00:21:59.180 22:23:31 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_26 removed' 00:21:59.180 22:23:31 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:59.180 22:23:31 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_27 00:21:59.180 22:23:31 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_27 00:21:59.439 [2024-07-23 22:23:31.498014] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (cf227d73-5d7e-4be6-bb06-eb7362e2cf50) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:59.439 INFO: lvol bdev lvs0/lbd_27 removed 00:21:59.439 22:23:31 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_27 removed' 00:21:59.439 22:23:31 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:59.439 22:23:31 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_28 00:21:59.439 22:23:31 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_28 00:21:59.697 [2024-07-23 22:23:31.674068] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (80b4342c-2882-4637-945b-316a1dcd0374) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:59.697 INFO: lvol bdev lvs0/lbd_28 removed 00:21:59.697 22:23:31 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_28 removed' 00:21:59.698 22:23:31 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:59.698 22:23:31 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_29 00:21:59.698 22:23:31 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_29 00:21:59.698 [2024-07-23 22:23:31.850120] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (5caf8b32-c297-41ce-b94a-06324439d900) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:59.698 INFO: lvol bdev lvs0/lbd_29 removed 00:21:59.698 22:23:31 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_29 removed' 00:21:59.698 22:23:31 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:59.698 22:23:31 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_30 00:21:59.698 22:23:31 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_30 00:21:59.957 [2024-07-23 22:23:32.042180] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (ec158fc7-79eb-475f-8db3-e324cf39f556) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:59.957 INFO: lvol bdev lvs0/lbd_30 removed 00:21:59.957 22:23:32 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_30 removed' 00:21:59.957 22:23:32 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@28 -- # sleep 1 00:22:00.905 INFO: Removing lvol stores 00:22:00.905 22:23:33 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@30 -- # echo 'INFO: Removing lvol stores' 00:22:00.905 22:23:33 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:22:01.175 INFO: lvol store lvs0 removed 00:22:01.175 INFO: Removing NVMe 00:22:01.176 22:23:33 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@32 -- # echo 'INFO: lvol store lvs0 removed' 00:22:01.176 22:23:33 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@34 -- # echo 'INFO: Removing NVMe' 00:22:01.176 22:23:33 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:22:01.434 22:23:33 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@37 -- # return 0 00:22:01.434 22:23:33 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@86 -- # killprocess 94497 00:22:01.434 22:23:33 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 94497 ']' 00:22:01.434 22:23:33 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@952 -- # kill -0 94497 00:22:01.434 22:23:33 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@953 -- # uname 00:22:01.434 22:23:33 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:01.434 22:23:33 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94497 00:22:01.434 killing process with pid 94497 00:22:01.434 22:23:33 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:01.434 22:23:33 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:01.434 22:23:33 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94497' 00:22:01.434 22:23:33 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@967 -- # kill 94497 00:22:01.434 22:23:33 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@972 -- # wait 94497 00:22:01.693 22:23:33 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@87 -- # iscsitestfini 00:22:01.693 
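The `killprocess 94497` trace above expands to a guarded kill-and-wait: check the pid is non-empty and alive, refuse to kill a `sudo` process, then `kill` and `wait`. A sketch of that pattern, reconstructed from the commands the log shows (the `uname` Linux check is folded out for brevity):

```shell
# Sketch of the killprocess pattern from autotest_common.sh as traced in the
# log: kill -0 liveness check, ps comm= name check against "sudo", then
# kill + wait. "|| true" on wait absorbs the non-zero status of a killed child.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" 2>/dev/null || return 0        # already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1                # never kill a sudo process
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}

sleep 30 & demo_pid=$!
killprocess "$demo_pid"
```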
22:23:33 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:22:01.693 00:22:01.693 real 0m44.565s 00:22:01.693 user 0m52.170s 00:22:01.693 sys 0m15.116s 00:22:01.693 22:23:33 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:01.693 22:23:33 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:01.693 ************************************ 00:22:01.693 END TEST iscsi_tgt_multiconnection 00:22:01.693 ************************************ 00:22:01.693 22:23:33 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@46 -- # '[' 1 -eq 1 ']' 00:22:01.693 22:23:33 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@47 -- # run_test iscsi_tgt_ext4test /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ext4test/ext4test.sh 00:22:01.693 22:23:33 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:01.693 22:23:33 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:01.693 22:23:33 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:22:01.693 ************************************ 00:22:01.693 START TEST iscsi_tgt_ext4test 00:22:01.693 ************************************ 00:22:01.693 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ext4test/ext4test.sh 00:22:01.952 * Looking for test storage... 
00:22:01.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ext4test 00:22:01.952 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:22:01.952 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:22:01.952 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:22:01.952 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:22:01.952 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:22:01.952 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:22:01.952 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:22:01.952 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- 
iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@24 -- # iscsitestinit 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@28 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@29 -- # node_base=iqn.2013-06.com.intel.ch.spdk 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@31 -- # timing_enter start_iscsi_tgt 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@10 -- # set +x 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@34 -- # pid=96865 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@33 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@35 -- # echo 'Process pid: 96865' 00:22:01.953 Process pid: 96865 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@37 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@39 -- # waitforlisten 96865 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@829 -- # '[' -z 96865 ']' 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.953 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:01.953 22:23:33 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@10 -- # set +x 00:22:01.953 [2024-07-23 22:23:34.027712] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:22:01.953 [2024-07-23 22:23:34.027817] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96865 ] 00:22:02.212 [2024-07-23 22:23:34.152673] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:02.212 [2024-07-23 22:23:34.169123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.212 [2024-07-23 22:23:34.217548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.212 22:23:34 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:02.212 22:23:34 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@862 -- # return 0 00:22:02.212 22:23:34 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 4 -b iqn.2013-06.com.intel.ch.spdk 00:22:02.471 22:23:34 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:22:02.471 [2024-07-23 22:23:34.647058] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:02.730 22:23:34 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:02.730 22:23:34 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:22:02.989 22:23:35 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 512 4096 --name Malloc0 00:22:03.248 Malloc0 00:22:03.506 iscsi_tgt is listening. Running tests... 00:22:03.506 22:23:35 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@44 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:22:03.506 22:23:35 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@46 -- # timing_exit start_iscsi_tgt 00:22:03.506 22:23:35 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:03.506 22:23:35 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@10 -- # set +x 00:22:03.506 22:23:35 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:22:03.506 22:23:35 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:22:04.073 22:23:35 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_create Malloc0 00:22:04.073 true 00:22:04.073 22:23:36 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target0 Target0_alias EE_Malloc0:0 1:2 64 -d 00:22:04.332 22:23:36 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@55 -- # sleep 1 00:22:05.269 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@57 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:22:05.269 10.0.0.1:3260,1 iqn.2013-06.com.intel.ch.spdk:Target0 00:22:05.269 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@58 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:22:05.269 Logging in to [iface: default, target: iqn.2013-06.com.intel.ch.spdk:Target0, portal: 10.0.0.1,3260] 00:22:05.269 Login to [iface: default, target: iqn.2013-06.com.intel.ch.spdk:Target0, portal: 10.0.0.1,3260] successful. 
00:22:05.269 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@59 -- # waitforiscsidevices 1 00:22:05.269 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@116 -- # local num=1 00:22:05.269 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:22:05.269 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:22:05.269 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:22:05.269 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:22:05.269 [2024-07-23 22:23:37.385903] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:05.269 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # n=1 00:22:05.269 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:22:05.269 Test error injection 00:22:05.269 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@123 -- # return 0 00:22:05.269 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@61 -- # echo 'Test error injection' 00:22:05.269 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_inject_error EE_Malloc0 all failure -n 1000 00:22:05.528 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # awk '{print $4}' 00:22:05.528 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # iscsiadm -m session -P 3 00:22:05.528 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # head -n1 00:22:05.528 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # grep 'Attached scsi disk' 00:22:05.528 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # dev=sda 00:22:05.528 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@65 -- # waitforfile /dev/sda 00:22:05.528 22:23:37 
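The `dev=sda` step above comes from a small pipeline in ext4test.sh that extracts the kernel disk name from `iscsiadm` session output. A minimal, self-contained sketch of that pipeline, run against a canned sample line (the `sample` string here is illustrative, not real `iscsiadm` output from this run):

```shell
# Sketch of the device-discovery pipeline seen in the log:
#   iscsiadm -m session -P 3 | grep 'Attached scsi disk' | awk '{print $4}' | head -n1
# "sample" stands in for live iscsiadm output, which we can't query here.
sample='		Attached scsi disk sda		State: running'

# Field 4 of the matching line is the device name (Attached=1 scsi=2 disk=3 sda=4).
dev=$(printf '%s\n' "$sample" | grep 'Attached scsi disk' | awk '{print $4}' | head -n1)
echo "$dev"   # prints "sda"
```

With a live target, the same pipeline yields whichever `sd[a-z]*` device the initiator attached, which the script then waits on with `waitforfile /dev/$dev` before running mkfs.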
iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1263 -- # local i=0 00:22:05.528 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1264 -- # '[' '!' -e /dev/sda ']' 00:22:05.528 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1270 -- # '[' '!' -e /dev/sda ']' 00:22:05.528 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1274 -- # return 0 00:22:05.528 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@66 -- # make_filesystem ext4 /dev/sda 00:22:05.528 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@924 -- # local fstype=ext4 00:22:05.528 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda 00:22:05.528 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@926 -- # local i=0 00:22:05.528 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@927 -- # local force 00:22:05.528 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:22:05.528 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@930 -- # force=-F 00:22:05.528 22:23:37 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:22:05.528 mke2fs 1.46.5 (30-Dec-2021) 00:22:06.045 Discarding device blocks: 0/131072 done 00:22:06.045 Creating filesystem with 131072 4k blocks and 32768 inodes 00:22:06.045 Filesystem UUID: 74af746e-439c-4b7b-a82c-b51339218e42 00:22:06.045 Superblock backups stored on blocks: 00:22:06.045 32768, 98304 00:22:06.045 00:22:06.045 Allocating group tables: 0/4 done 00:22:06.045 Warning: could not erase sector 2: Input/output error 00:22:06.045 Warning: could not read block 0: Input/output error 00:22:06.304 Warning: could not erase sector 0: Input/output error 00:22:06.304 Writing inode tables: 0/4 done 00:22:06.304 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:22:06.304 22:23:38 
iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 0 -ge 15 ']' 00:22:06.304 22:23:38 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=1 00:22:06.304 [2024-07-23 22:23:38.409386] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:06.304 22:23:38 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:22:07.240 22:23:39 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:22:07.240 mke2fs 1.46.5 (30-Dec-2021) 00:22:07.499 Discarding device blocks: 0/131072 done 00:22:07.758 Creating filesystem with 131072 4k blocks and 32768 inodes 00:22:07.758 Filesystem UUID: 84dc37bc-fd69-4a01-b216-b196b8fbd5e5 00:22:07.758 Superblock backups stored on blocks: 00:22:07.758 32768, 98304 00:22:07.758 00:22:07.758 Allocating group tables: 0/4Warning: could not erase sector 2: Input/output error 00:22:07.758 done 00:22:07.758 Warning: could not read block 0: Input/output error 00:22:07.758 Warning: could not erase sector 0: Input/output error 00:22:07.758 Writing inode tables: 0/4 done 00:22:08.017 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:22:08.017 22:23:39 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 1 -ge 15 ']' 00:22:08.017 [2024-07-23 22:23:39.977726] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:08.017 22:23:39 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=2 00:22:08.017 22:23:39 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:22:08.955 22:23:40 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:22:08.955 mke2fs 1.46.5 (30-Dec-2021) 00:22:09.215 Discarding device blocks: 0/131072 done 00:22:09.215 Creating filesystem with 131072 4k blocks and 32768 inodes 00:22:09.215 Filesystem UUID: 3ae87a70-61d9-4671-9c1d-28f23cb0de64 00:22:09.215 Superblock backups 
stored on blocks: 00:22:09.215 32768, 98304 00:22:09.215 00:22:09.215 Allocating group tables: 0/4 done 00:22:09.215 Warning: could not erase sector 2: Input/output error 00:22:09.215 Warning: could not read block 0: Input/output error 00:22:09.474 Warning: could not erase sector 0: Input/output error 00:22:09.474 Writing inode tables: 0/4 done 00:22:09.474 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:22:09.474 22:23:41 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 2 -ge 15 ']' 00:22:09.474 [2024-07-23 22:23:41.545653] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:09.474 22:23:41 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=3 00:22:09.474 22:23:41 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:22:10.412 22:23:42 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:22:10.412 mke2fs 1.46.5 (30-Dec-2021) 00:22:10.672 Discarding device blocks: 0/131072 done 00:22:10.931 Creating filesystem with 131072 4k blocks and 32768 inodes 00:22:10.931 Filesystem UUID: b59f335f-869c-4cc7-b810-e19b2a8c5906 00:22:10.931 Superblock backups stored on blocks: 00:22:10.931 32768, 98304 00:22:10.931 00:22:10.931 Allocating group tables: 0/4 done 00:22:10.931 Warning: could not erase sector 2: Input/output error 00:22:10.931 Warning: could not read block 0: Input/output error 00:22:10.931 Warning: could not erase sector 0: Input/output error 00:22:10.931 Writing inode tables: 0/4 done 00:22:11.190 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:22:11.191 [2024-07-23 22:23:43.238090] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:11.191 22:23:43 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 3 -ge 15 ']' 00:22:11.191 22:23:43 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=4 00:22:11.191 
22:23:43 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:22:12.127 22:23:44 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:22:12.127 mke2fs 1.46.5 (30-Dec-2021) 00:22:12.386 Discarding device blocks: 0/131072 done 00:22:12.386 Creating filesystem with 131072 4k blocks and 32768 inodes 00:22:12.386 Filesystem UUID: f3031b7c-0957-427c-bac6-f4db05015243 00:22:12.386 Superblock backups stored on blocks: 00:22:12.386 32768, 98304 00:22:12.386 00:22:12.386 Allocating group tables: 0/4 done 00:22:12.386 Warning: could not erase sector 2: Input/output error 00:22:12.645 Warning: could not read block 0: Input/output error 00:22:12.645 Warning: could not erase sector 0: Input/output error 00:22:12.645 Writing inode tables: 0/4 done 00:22:12.645 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:22:12.645 22:23:44 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 4 -ge 15 ']' 00:22:12.645 22:23:44 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=5 00:22:12.645 22:23:44 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:22:14.025 22:23:45 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:22:14.025 mke2fs 1.46.5 (30-Dec-2021) 00:22:14.025 Discarding device blocks: 0/131072 done 00:22:14.025 Creating filesystem with 131072 4k blocks and 32768 inodes 00:22:14.025 Filesystem UUID: 3961246e-2b60-4966-848a-34add180ce12 00:22:14.025 Superblock backups stored on blocks: 00:22:14.025 32768, 98304 00:22:14.025 00:22:14.025 Allocating group tables: 0/4 done 00:22:14.025 Warning: could not erase sector 2: Input/output error 00:22:14.025 Warning: could not read block 0: Input/output error 00:22:14.284 Warning: could not erase sector 0: Input/output error 00:22:14.284 Writing inode tables: 0/4 done 00:22:14.284 ext2fs_write_inode_full: Input/output error while writing reserved inodes 
00:22:14.284 22:23:46 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 5 -ge 15 ']' 00:22:14.284 22:23:46 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=6 00:22:14.284 22:23:46 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:22:14.284 [2024-07-23 22:23:46.380643] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:15.242 22:23:47 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:22:15.242 mke2fs 1.46.5 (30-Dec-2021) 00:22:15.513 Discarding device blocks: 0/131072 done 00:22:15.513 Creating filesystem with 131072 4k blocks and 32768 inodes 00:22:15.513 Filesystem UUID: bcc5eff3-510b-4d19-968b-ef2d1aeedd2f 00:22:15.513 Superblock backups stored on blocks: 00:22:15.513 32768, 98304 00:22:15.513 00:22:15.513 Allocating group tables: 0/4 done 00:22:15.513 Warning: could not erase sector 2: Input/output error 00:22:15.772 Warning: could not read block 0: Input/output error 00:22:15.772 Warning: could not erase sector 0: Input/output error 00:22:15.772 Writing inode tables: 0/4 done 00:22:16.032 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:22:16.032 22:23:48 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 6 -ge 15 ']' 00:22:16.032 22:23:48 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=7 00:22:16.032 [2024-07-23 22:23:48.039106] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:16.032 22:23:48 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:22:16.970 22:23:49 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:22:16.970 mke2fs 1.46.5 (30-Dec-2021) 00:22:17.229 Discarding device blocks: 0/131072 done 00:22:17.229 Creating filesystem with 131072 4k blocks and 32768 inodes 00:22:17.229 Filesystem UUID: 2522b96c-f8b8-4a88-b3b6-2ee66e0a91e8 
00:22:17.229 Superblock backups stored on blocks: 00:22:17.229 32768, 98304 00:22:17.229 00:22:17.229 Allocating group tables: 0/4 done 00:22:17.229 Warning: could not erase sector 2: Input/output error 00:22:17.489 Warning: could not read block 0: Input/output error 00:22:17.489 Warning: could not erase sector 0: Input/output error 00:22:17.489 Writing inode tables: 0/4 done 00:22:17.489 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:22:17.489 22:23:49 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 7 -ge 15 ']' 00:22:17.489 22:23:49 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=8 00:22:17.489 22:23:49 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:22:17.489 [2024-07-23 22:23:49.607298] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:18.426 22:23:50 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:22:18.426 mke2fs 1.46.5 (30-Dec-2021) 00:22:18.685 Discarding device blocks: 0/131072 done 00:22:18.944 Creating filesystem with 131072 4k blocks and 32768 inodes 00:22:18.944 Filesystem UUID: 01e655a1-fe10-4971-b808-e7daf0b4448c 00:22:18.944 Superblock backups stored on blocks: 00:22:18.944 32768, 98304 00:22:18.944 00:22:18.944 Allocating group tables: 0/4Warning: could not erase sector 2: Input/output error 00:22:18.944 done 00:22:18.944 Warning: could not read block 0: Input/output error 00:22:18.944 Warning: could not erase sector 0: Input/output error 00:22:18.944 Writing inode tables: 0/4 done 00:22:19.203 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:22:19.203 22:23:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 8 -ge 15 ']' 00:22:19.203 22:23:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=9 00:22:19.203 22:23:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 
00:22:19.203 [2024-07-23 22:23:51.177280] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:20.142 22:23:52 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:22:20.142 mke2fs 1.46.5 (30-Dec-2021) 00:22:20.401 Discarding device blocks: 0/131072 done 00:22:20.401 Creating filesystem with 131072 4k blocks and 32768 inodes 00:22:20.401 Filesystem UUID: 7f64250d-4a93-4493-a6b0-59b172f450d8 00:22:20.401 Superblock backups stored on blocks: 00:22:20.401 32768, 98304 00:22:20.401 00:22:20.401 Allocating group tables: 0/4 done 00:22:20.401 Warning: could not erase sector 2: Input/output error 00:22:20.401 Warning: could not read block 0: Input/output error 00:22:20.660 Writing inode tables: 0/4 done 00:22:20.660 Creating journal (4096 blocks): done 00:22:20.660 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:22:20.660 22:23:52 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 9 -ge 15 ']' 00:22:20.660 [2024-07-23 22:23:52.721799] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:20.660 22:23:52 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=10 00:22:20.660 22:23:52 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:22:21.597 22:23:53 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:22:21.597 mke2fs 1.46.5 (30-Dec-2021) 00:22:21.856 Discarding device blocks: 0/131072 done 00:22:21.856 Creating filesystem with 131072 4k blocks and 32768 inodes 00:22:21.856 Filesystem UUID: 2b618a47-51a0-4ec2-b21e-fe4b2ddd309e 00:22:21.856 Superblock backups stored on blocks: 00:22:21.856 32768, 98304 00:22:21.856 00:22:21.856 Allocating group tables: 0/4 done 00:22:21.856 Writing inode tables: 0/4 done 00:22:21.856 Creating journal (4096 blocks): done 00:22:21.856 Writing 
superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:22:21.856 22:23:54 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 10 -ge 15 ']' 00:22:21.856 22:23:54 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=11 00:22:21.856 [2024-07-23 22:23:54.023506] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:21.856 22:23:54 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:22:23.232 22:23:55 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:22:23.232 mke2fs 1.46.5 (30-Dec-2021) 00:22:23.232 Discarding device blocks: 0/131072 done 00:22:23.232 Creating filesystem with 131072 4k blocks and 32768 inodes 00:22:23.232 Filesystem UUID: 7c644a01-cc3d-4a89-a76e-2f662f78b296 00:22:23.232 Superblock backups stored on blocks: 00:22:23.232 32768, 98304 00:22:23.232 00:22:23.232 Allocating group tables: 0/4 done 00:22:23.232 Writing inode tables: 0/4 done 00:22:23.232 Creating journal (4096 blocks): done 00:22:23.232 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:22:23.232 22:23:55 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 11 -ge 15 ']' 00:22:23.232 [2024-07-23 22:23:55.321413] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:23.232 22:23:55 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=12 00:22:23.232 22:23:55 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:22:24.166 22:23:56 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:22:24.166 mke2fs 1.46.5 (30-Dec-2021) 00:22:24.425 Discarding device blocks: 0/131072 done 00:22:24.425 Creating filesystem with 131072 4k blocks and 32768 inodes 00:22:24.425 Filesystem UUID: 
bbda5dbf-a88c-4fa6-97a5-b7540aaee364 00:22:24.425 Superblock backups stored on blocks: 00:22:24.425 32768, 98304 00:22:24.425 00:22:24.425 Allocating group tables: 0/4 done 00:22:24.425 Writing inode tables: 0/4 done 00:22:24.425 Creating journal (4096 blocks): done 00:22:24.683 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:22:24.683 22:23:56 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 12 -ge 15 ']' 00:22:24.683 [2024-07-23 22:23:56.621613] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:24.683 22:23:56 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=13 00:22:24.683 22:23:56 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:22:25.619 22:23:57 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:22:25.619 mke2fs 1.46.5 (30-Dec-2021) 00:22:25.619 Discarding device blocks: 0/131072 done 00:22:25.619 Creating filesystem with 131072 4k blocks and 32768 inodes 00:22:25.619 Filesystem UUID: dc1d0c46-ac50-4650-a8fb-4c323ae759bd 00:22:25.619 Superblock backups stored on blocks: 00:22:25.619 32768, 98304 00:22:25.619 00:22:25.619 Allocating group tables: 0/4 done 00:22:25.619 Writing inode tables: 0/4 done 00:22:25.877 Creating journal (4096 blocks): done 00:22:25.877 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:22:25.877 [2024-07-23 22:23:57.896941] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:25.877 22:23:57 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 13 -ge 15 ']' 00:22:25.877 22:23:57 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=14 00:22:25.877 22:23:57 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:22:26.814 22:23:58 
iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:22:26.814 mke2fs 1.46.5 (30-Dec-2021) 00:22:27.073 Discarding device blocks: 0/131072 done 00:22:27.073 Creating filesystem with 131072 4k blocks and 32768 inodes 00:22:27.073 Filesystem UUID: b82d3a9a-29f6-450e-a784-d2409de0768e 00:22:27.073 Superblock backups stored on blocks: 00:22:27.073 32768, 98304 00:22:27.073 00:22:27.073 Allocating group tables: 0/4 done 00:22:27.073 Writing inode tables: 0/4 done 00:22:27.073 Creating journal (4096 blocks): done 00:22:27.073 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:22:27.073 22:23:59 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 14 -ge 15 ']' 00:22:27.073 22:23:59 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=15 00:22:27.073 [2024-07-23 22:23:59.201423] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:27.073 22:23:59 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:22:28.012 22:24:00 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:22:28.296 mke2fs 1.46.5 (30-Dec-2021) 00:22:28.296 Discarding device blocks: 0/131072 done 00:22:28.296 Creating filesystem with 131072 4k blocks and 32768 inodes 00:22:28.296 Filesystem UUID: 4ea0a575-15d2-41a7-8c50-d356afbc94a7 00:22:28.296 Superblock backups stored on blocks: 00:22:28.296 32768, 98304 00:22:28.296 00:22:28.296 Allocating group tables: 0/4 done 00:22:28.296 Writing inode tables: 0/4 done 00:22:28.296 Creating journal (4096 blocks): done 00:22:28.565 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:22:28.565 22:24:00 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 15 -ge 15 ']' 00:22:28.565 [2024-07-23 22:24:00.495843] 
scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:28.565 22:24:00 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@937 -- # return 1 00:22:28.565 mkfs failed as expected 00:22:28.565 Cleaning up iSCSI connection 00:22:28.565 22:24:00 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@70 -- # echo 'mkfs failed as expected' 00:22:28.565 22:24:00 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@73 -- # iscsicleanup 00:22:28.565 22:24:00 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:22:28.565 22:24:00 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:22:28.565 Logging out of session [sid: 71, target: iqn.2013-06.com.intel.ch.spdk:Target0, portal: 10.0.0.1,3260] 00:22:28.565 Logout of [sid: 71, target: iqn.2013-06.com.intel.ch.spdk:Target0, portal: 10.0.0.1,3260] successful. 00:22:28.565 22:24:00 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:22:28.565 22:24:00 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@983 -- # rm -rf 00:22:28.565 22:24:00 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_inject_error EE_Malloc0 clear failure 00:22:28.824 22:24:00 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_delete_target_node iqn.2013-06.com.intel.ch.spdk:Target0 00:22:29.083 Error injection test done 00:22:29.083 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@76 -- # echo 'Error injection test done' 00:22:29.083 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@78 -- # get_bdev_size Nvme0n1 00:22:29.083 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1376 -- # local bdev_name=Nvme0n1 00:22:29.083 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1377 -- # local bdev_info 
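The fifteen mkfs attempts above, each ending in an injected I/O error, are driven by the `make_filesystem` helper in autotest_common.sh: it retries mkfs once per second and gives up (returns 1) after the counter reaches 15, which is exactly the `'[' 15 -ge 15 ']'` / `return 1` / "mkfs failed as expected" sequence in the log. A hedged, self-contained sketch of that retry pattern — `try_mkfs` is a hypothetical stub that always fails, standing in for `mkfs.ext4 -F /dev/sda` against the error-injected bdev:

```shell
# Sketch of the make_filesystem retry loop visible in the log.
# try_mkfs is a stub mimicking mkfs.ext4 failing on the EE_Malloc0 error bdev.
try_mkfs() { return 1; }

make_filesystem_sketch() {
    local i=0
    while ! try_mkfs; do
        if [ "$i" -ge 15 ]; then
            return 1          # give up after 15 retries, as in the log
        fi
        i=$((i + 1))
        sleep 0.01            # the real helper sleeps 1s between attempts
    done
    return 0
}

make_filesystem_sketch || echo 'mkfs failed as expected'   # prints "mkfs failed as expected"
```

Because the test injects 1000 failures (`bdev_error_inject_error EE_Malloc0 all failure -n 1000`), every attempt hits an I/O error, the helper exhausts its retries, and the test treats that return of 1 as a pass before clearing the injection and logging out of the session.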
00:22:29.083 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1378 -- # local bs 00:22:29.083 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1379 -- # local nb 00:22:29.083 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 00:22:29.343 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:22:29.343 { 00:22:29.343 "name": "Nvme0n1", 00:22:29.343 "aliases": [ 00:22:29.343 "90b3bfca-7fa5-4bf0-92ff-67e8ab747909" 00:22:29.343 ], 00:22:29.343 "product_name": "NVMe disk", 00:22:29.343 "block_size": 4096, 00:22:29.343 "num_blocks": 1310720, 00:22:29.343 "uuid": "90b3bfca-7fa5-4bf0-92ff-67e8ab747909", 00:22:29.343 "assigned_rate_limits": { 00:22:29.343 "rw_ios_per_sec": 0, 00:22:29.343 "rw_mbytes_per_sec": 0, 00:22:29.343 "r_mbytes_per_sec": 0, 00:22:29.343 "w_mbytes_per_sec": 0 00:22:29.343 }, 00:22:29.343 "claimed": false, 00:22:29.343 "zoned": false, 00:22:29.343 "supported_io_types": { 00:22:29.343 "read": true, 00:22:29.343 "write": true, 00:22:29.343 "unmap": true, 00:22:29.343 "flush": true, 00:22:29.343 "reset": true, 00:22:29.343 "nvme_admin": true, 00:22:29.343 "nvme_io": true, 00:22:29.343 "nvme_io_md": false, 00:22:29.343 "write_zeroes": true, 00:22:29.343 "zcopy": false, 00:22:29.343 "get_zone_info": false, 00:22:29.343 "zone_management": false, 00:22:29.343 "zone_append": false, 00:22:29.343 "compare": true, 00:22:29.343 "compare_and_write": false, 00:22:29.343 "abort": true, 00:22:29.343 "seek_hole": false, 00:22:29.343 "seek_data": false, 00:22:29.343 "copy": true, 00:22:29.343 "nvme_iov_md": false 00:22:29.343 }, 00:22:29.343 "driver_specific": { 00:22:29.343 "nvme": [ 00:22:29.343 { 00:22:29.343 "pci_address": "0000:00:10.0", 00:22:29.343 "trid": { 00:22:29.343 "trtype": "PCIe", 00:22:29.343 "traddr": "0000:00:10.0" 00:22:29.343 }, 00:22:29.343 "ctrlr_data": { 00:22:29.343 "cntlid": 0, 
00:22:29.343 "vendor_id": "0x1b36", 00:22:29.343 "model_number": "QEMU NVMe Ctrl", 00:22:29.343 "serial_number": "12340", 00:22:29.343 "firmware_revision": "8.0.0", 00:22:29.343 "subnqn": "nqn.2019-08.org.qemu:12340", 00:22:29.343 "oacs": { 00:22:29.343 "security": 0, 00:22:29.343 "format": 1, 00:22:29.343 "firmware": 0, 00:22:29.343 "ns_manage": 1 00:22:29.343 }, 00:22:29.343 "multi_ctrlr": false, 00:22:29.343 "ana_reporting": false 00:22:29.343 }, 00:22:29.343 "vs": { 00:22:29.343 "nvme_version": "1.4" 00:22:29.343 }, 00:22:29.343 "ns_data": { 00:22:29.343 "id": 1, 00:22:29.343 "can_share": false 00:22:29.343 } 00:22:29.343 } 00:22:29.343 ], 00:22:29.343 "mp_policy": "active_passive" 00:22:29.343 } 00:22:29.343 } 00:22:29.343 ]' 00:22:29.343 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:22:29.343 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1381 -- # bs=4096 00:22:29.343 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:22:29.343 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1382 -- # nb=1310720 00:22:29.343 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1385 -- # bdev_size=5120 00:22:29.343 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1386 -- # echo 5120 00:22:29.343 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@78 -- # bdev_size=5120 00:22:29.343 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@79 -- # split_size=2560 00:22:29.343 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@80 -- # split_size=2560 00:22:29.343 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create Nvme0n1 2 -s 2560 00:22:29.603 Nvme0n1p0 Nvme0n1p1 00:22:29.603 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
iscsi_create_target_node Target1 Target1_alias Nvme0n1p0:0 1:2 64 -d 00:22:29.862 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@84 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:22:29.862 10.0.0.1:3260,1 iqn.2013-06.com.intel.ch.spdk:Target1 00:22:29.862 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@85 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:22:29.862 Logging in to [iface: default, target: iqn.2013-06.com.intel.ch.spdk:Target1, portal: 10.0.0.1,3260] 00:22:29.862 Login to [iface: default, target: iqn.2013-06.com.intel.ch.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:22:29.862 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@86 -- # waitforiscsidevices 1 00:22:29.862 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@116 -- # local num=1 00:22:29.862 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:22:29.862 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:22:29.862 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:22:29.862 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:22:29.862 [2024-07-23 22:24:01.914241] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:29.862 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # n=1 00:22:29.862 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:22:29.862 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@123 -- # return 0 00:22:29.862 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # iscsiadm -m session -P 3 00:22:29.862 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # head -n1 00:22:29.862 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # grep 'Attached scsi disk' 00:22:29.862 22:24:01 
iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # awk '{print $4}' 00:22:29.862 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # dev=sda 00:22:29.862 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@89 -- # waitforfile /dev/sda 00:22:29.862 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1263 -- # local i=0 00:22:29.862 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1264 -- # '[' '!' -e /dev/sda ']' 00:22:29.862 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1270 -- # '[' '!' -e /dev/sda ']' 00:22:29.862 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1274 -- # return 0 00:22:29.862 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@91 -- # make_filesystem ext4 /dev/sda 00:22:29.862 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@924 -- # local fstype=ext4 00:22:29.862 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda 00:22:29.862 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@926 -- # local i=0 00:22:29.862 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@927 -- # local force 00:22:29.862 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:22:29.862 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@930 -- # force=-F 00:22:29.862 22:24:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:22:29.862 mke2fs 1.46.5 (30-Dec-2021) 00:22:29.862 Discarding device blocks: 0/655360 done 00:22:29.862 Creating filesystem with 655360 4k blocks and 163840 inodes 00:22:29.862 Filesystem UUID: 3a9b8482-b542-47bd-97ca-ea94f06df012 00:22:29.862 Superblock backups stored on blocks: 00:22:29.862 32768, 98304, 163840, 229376, 294912 00:22:29.862 00:22:29.862 Allocating group tables: 0/20 done 00:22:29.862 Writing inode tables: 0/20 done 
00:22:30.121 Creating journal (16384 blocks): done 00:22:30.121 Writing superblocks and filesystem accounting information: 0/20 done 00:22:30.121 00:22:30.121 22:24:02 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@943 -- # return 0 00:22:30.121 22:24:02 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@92 -- # mkdir -p /mnt/sdadir 00:22:30.121 [2024-07-23 22:24:02.316833] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:30.380 22:24:02 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@93 -- # mount -o sync /dev/sda /mnt/sdadir 00:22:30.380 22:24:02 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@95 -- # rsync -qav --exclude=.git '--exclude=*.o' /home/vagrant/spdk_repo/spdk/ /mnt/sdadir/spdk 00:23:38.084 22:25:02 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@97 -- # make -C /mnt/sdadir/spdk clean 00:23:38.084 make: Entering directory '/mnt/sdadir/spdk' 00:24:24.764 make[1]: Nothing to be done for 'clean'. 00:24:25.331 make: Leaving directory '/mnt/sdadir/spdk' 00:24:25.331 22:25:57 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@98 -- # cd /mnt/sdadir/spdk 00:24:25.331 22:25:57 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@98 -- # ./configure --disable-unit-tests --disable-tests 00:24:25.590 Using default SPDK env in /mnt/sdadir/spdk/lib/env_dpdk 00:24:25.590 Using default DPDK in /mnt/sdadir/spdk/dpdk/build 00:24:48.094 Configuring ISA-L (logfile: /mnt/sdadir/spdk/.spdk-isal.log)...done. 00:25:10.054 Configuring ISA-L-crypto (logfile: /mnt/sdadir/spdk/.spdk-isal-crypto.log)...done. 00:25:10.054 Creating mk/config.mk...done. 00:25:10.054 Creating mk/cc.flags.mk...done. 00:25:10.054 Type 'make' to build. 00:25:10.054 22:26:39 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@99 -- # make -C /mnt/sdadir/spdk -j 00:25:10.054 make: Entering directory '/mnt/sdadir/spdk' 00:25:10.054 make[1]: Nothing to be done for 'all'. 
00:25:28.148 The Meson build system 00:25:28.148 Version: 1.3.1 00:25:28.148 Source dir: /mnt/sdadir/spdk/dpdk 00:25:28.148 Build dir: /mnt/sdadir/spdk/dpdk/build-tmp 00:25:28.148 Build type: native build 00:25:28.148 Program cat found: YES (/usr/bin/cat) 00:25:28.148 Project name: DPDK 00:25:28.148 Project version: 24.03.0 00:25:28.148 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:25:28.148 C linker for the host machine: cc ld.bfd 2.39-16 00:25:28.148 Host machine cpu family: x86_64 00:25:28.148 Host machine cpu: x86_64 00:25:28.148 Program pkg-config found: YES (/usr/bin/pkg-config) 00:25:28.148 Program check-symbols.sh found: YES (/mnt/sdadir/spdk/dpdk/buildtools/check-symbols.sh) 00:25:28.149 Program options-ibverbs-static.sh found: YES (/mnt/sdadir/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:25:28.149 Program python3 found: YES (/usr/bin/python3) 00:25:28.149 Program cat found: YES (/usr/bin/cat) 00:25:28.149 Compiler for C supports arguments -march=native: YES 00:25:28.149 Checking for size of "void *" : 8 00:25:28.149 Checking for size of "void *" : 8 (cached) 00:25:28.149 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:25:28.149 Library m found: YES 00:25:28.149 Library numa found: YES 00:25:28.149 Has header "numaif.h" : YES 00:25:28.149 Library fdt found: NO 00:25:28.149 Library execinfo found: NO 00:25:28.149 Has header "execinfo.h" : YES 00:25:28.149 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:25:28.149 Run-time dependency libarchive found: NO (tried pkgconfig) 00:25:28.149 Run-time dependency libbsd found: NO (tried pkgconfig) 00:25:28.149 Run-time dependency jansson found: NO (tried pkgconfig) 00:25:28.149 Run-time dependency openssl found: YES 3.0.9 00:25:28.149 Run-time dependency libpcap found: YES 1.10.4 00:25:28.149 Has header "pcap.h" with dependency libpcap: YES 00:25:28.149 Compiler for C supports arguments -Wcast-qual: YES 00:25:28.149 Compiler for C 
supports arguments -Wdeprecated: YES 00:25:28.149 Compiler for C supports arguments -Wformat: YES 00:25:28.149 Compiler for C supports arguments -Wformat-nonliteral: YES 00:25:28.149 Compiler for C supports arguments -Wformat-security: YES 00:25:28.149 Compiler for C supports arguments -Wmissing-declarations: YES 00:25:28.149 Compiler for C supports arguments -Wmissing-prototypes: YES 00:25:28.149 Compiler for C supports arguments -Wnested-externs: YES 00:25:28.149 Compiler for C supports arguments -Wold-style-definition: YES 00:25:28.149 Compiler for C supports arguments -Wpointer-arith: YES 00:25:28.149 Compiler for C supports arguments -Wsign-compare: YES 00:25:28.149 Compiler for C supports arguments -Wstrict-prototypes: YES 00:25:28.149 Compiler for C supports arguments -Wundef: YES 00:25:28.149 Compiler for C supports arguments -Wwrite-strings: YES 00:25:28.149 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:25:28.149 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:25:28.149 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:25:28.149 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:25:28.149 Program objdump found: YES (/usr/bin/objdump) 00:25:28.149 Compiler for C supports arguments -mavx512f: YES 00:25:28.149 Checking if "AVX512 checking" compiles: YES 00:25:28.149 Fetching value of define "__SSE4_2__" : 1 00:25:28.149 Fetching value of define "__AES__" : 1 00:25:28.149 Fetching value of define "__AVX__" : 1 00:25:28.149 Fetching value of define "__AVX2__" : 1 00:25:28.149 Fetching value of define "__AVX512BW__" : 1 00:25:28.149 Fetching value of define "__AVX512CD__" : 1 00:25:28.149 Fetching value of define "__AVX512DQ__" : 1 00:25:28.149 Fetching value of define "__AVX512F__" : 1 00:25:28.149 Fetching value of define "__AVX512VL__" : 1 00:25:28.149 Fetching value of define "__PCLMUL__" : 1 00:25:28.149 Fetching value of define "__RDRND__" : 1 00:25:28.149 Fetching value of 
define "__RDSEED__" : 1 00:25:28.149 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:25:28.149 Fetching value of define "__znver1__" : (undefined) 00:25:28.149 Fetching value of define "__znver2__" : (undefined) 00:25:28.149 Fetching value of define "__znver3__" : (undefined) 00:25:28.149 Fetching value of define "__znver4__" : (undefined) 00:25:28.149 Compiler for C supports arguments -Wno-format-truncation: YES 00:25:28.149 Checking for function "getentropy" : NO 00:25:28.149 Fetching value of define "__PCLMUL__" : 1 (cached) 00:25:28.149 Fetching value of define "__AVX512F__" : 1 (cached) 00:25:28.149 Fetching value of define "__AVX512BW__" : 1 (cached) 00:25:28.149 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:25:28.149 Fetching value of define "__AVX512VL__" : 1 (cached) 00:25:28.149 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:25:28.149 Compiler for C supports arguments -mpclmul: YES 00:25:28.149 Compiler for C supports arguments -maes: YES 00:25:28.149 Compiler for C supports arguments -mavx512f: YES (cached) 00:25:28.149 Compiler for C supports arguments -mavx512bw: YES 00:25:28.149 Compiler for C supports arguments -mavx512dq: YES 00:25:28.149 Compiler for C supports arguments -mavx512vl: YES 00:25:28.149 Compiler for C supports arguments -mvpclmulqdq: YES 00:25:28.149 Compiler for C supports arguments -mavx2: YES 00:25:28.149 Compiler for C supports arguments -mavx: YES 00:25:28.149 Compiler for C supports arguments -Wno-cast-qual: YES 00:25:28.149 Has header "linux/userfaultfd.h" : YES 00:25:28.149 Has header "linux/vduse.h" : YES 00:25:28.149 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:25:28.149 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:25:28.149 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:25:28.149 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:25:28.149 Message: Disabling event/* drivers: 
missing internal dependency "eventdev" 00:25:28.149 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:25:28.149 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:25:28.149 Program doxygen found: YES (/usr/bin/doxygen) 00:25:28.149 Configuring doxy-api-html.conf using configuration 00:25:28.149 Configuring doxy-api-man.conf using configuration 00:25:28.149 Program mandb found: YES (/usr/bin/mandb) 00:25:28.149 Program sphinx-build found: NO 00:25:28.149 Configuring rte_build_config.h using configuration 00:25:28.149 Message: 00:25:28.149 ================= 00:25:28.149 Applications Enabled 00:25:28.149 ================= 00:25:28.149 00:25:28.149 apps: 00:25:28.149 00:25:28.149 00:25:28.149 Message: 00:25:28.149 ================= 00:25:28.149 Libraries Enabled 00:25:28.149 ================= 00:25:28.149 00:25:28.149 libs: 00:25:28.149 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:25:28.149 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:25:28.149 cryptodev, dmadev, power, reorder, security, vhost, 00:25:28.149 00:25:28.149 Message: 00:25:28.149 =============== 00:25:28.149 Drivers Enabled 00:25:28.149 =============== 00:25:28.149 00:25:28.149 common: 00:25:28.149 00:25:28.149 bus: 00:25:28.149 pci, vdev, 00:25:28.149 mempool: 00:25:28.149 ring, 00:25:28.149 dma: 00:25:28.149 00:25:28.149 net: 00:25:28.149 00:25:28.149 crypto: 00:25:28.149 00:25:28.149 compress: 00:25:28.149 00:25:28.149 vdpa: 00:25:28.149 00:25:28.149 00:25:28.149 Message: 00:25:28.149 ================= 00:25:28.149 Content Skipped 00:25:28.149 ================= 00:25:28.149 00:25:28.149 apps: 00:25:28.149 dumpcap: explicitly disabled via build config 00:25:28.149 graph: explicitly disabled via build config 00:25:28.149 pdump: explicitly disabled via build config 00:25:28.149 proc-info: explicitly disabled via build config 00:25:28.149 test-acl: explicitly disabled via build config 00:25:28.149 test-bbdev: explicitly 
disabled via build config 00:25:28.149 test-cmdline: explicitly disabled via build config 00:25:28.149 test-compress-perf: explicitly disabled via build config 00:25:28.149 test-crypto-perf: explicitly disabled via build config 00:25:28.149 test-dma-perf: explicitly disabled via build config 00:25:28.149 test-eventdev: explicitly disabled via build config 00:25:28.149 test-fib: explicitly disabled via build config 00:25:28.149 test-flow-perf: explicitly disabled via build config 00:25:28.149 test-gpudev: explicitly disabled via build config 00:25:28.149 test-mldev: explicitly disabled via build config 00:25:28.149 test-pipeline: explicitly disabled via build config 00:25:28.149 test-pmd: explicitly disabled via build config 00:25:28.149 test-regex: explicitly disabled via build config 00:25:28.149 test-sad: explicitly disabled via build config 00:25:28.149 test-security-perf: explicitly disabled via build config 00:25:28.149 00:25:28.149 libs: 00:25:28.149 argparse: explicitly disabled via build config 00:25:28.149 metrics: explicitly disabled via build config 00:25:28.149 acl: explicitly disabled via build config 00:25:28.149 bbdev: explicitly disabled via build config 00:25:28.149 bitratestats: explicitly disabled via build config 00:25:28.149 bpf: explicitly disabled via build config 00:25:28.149 cfgfile: explicitly disabled via build config 00:25:28.149 distributor: explicitly disabled via build config 00:25:28.149 efd: explicitly disabled via build config 00:25:28.149 eventdev: explicitly disabled via build config 00:25:28.149 dispatcher: explicitly disabled via build config 00:25:28.149 gpudev: explicitly disabled via build config 00:25:28.149 gro: explicitly disabled via build config 00:25:28.149 gso: explicitly disabled via build config 00:25:28.149 ip_frag: explicitly disabled via build config 00:25:28.149 jobstats: explicitly disabled via build config 00:25:28.149 latencystats: explicitly disabled via build config 00:25:28.149 lpm: explicitly disabled via 
build config 00:25:28.149 member: explicitly disabled via build config 00:25:28.149 pcapng: explicitly disabled via build config 00:25:28.149 rawdev: explicitly disabled via build config 00:25:28.149 regexdev: explicitly disabled via build config 00:25:28.149 mldev: explicitly disabled via build config 00:25:28.149 rib: explicitly disabled via build config 00:25:28.149 sched: explicitly disabled via build config 00:25:28.149 stack: explicitly disabled via build config 00:25:28.149 ipsec: explicitly disabled via build config 00:25:28.149 pdcp: explicitly disabled via build config 00:25:28.149 fib: explicitly disabled via build config 00:25:28.149 port: explicitly disabled via build config 00:25:28.149 pdump: explicitly disabled via build config 00:25:28.149 table: explicitly disabled via build config 00:25:28.149 pipeline: explicitly disabled via build config 00:25:28.149 graph: explicitly disabled via build config 00:25:28.149 node: explicitly disabled via build config 00:25:28.149 00:25:28.150 drivers: 00:25:28.150 common/cpt: not in enabled drivers build config 00:25:28.150 common/dpaax: not in enabled drivers build config 00:25:28.150 common/iavf: not in enabled drivers build config 00:25:28.150 common/idpf: not in enabled drivers build config 00:25:28.150 common/ionic: not in enabled drivers build config 00:25:28.150 common/mvep: not in enabled drivers build config 00:25:28.150 common/octeontx: not in enabled drivers build config 00:25:28.150 bus/auxiliary: not in enabled drivers build config 00:25:28.150 bus/cdx: not in enabled drivers build config 00:25:28.150 bus/dpaa: not in enabled drivers build config 00:25:28.150 bus/fslmc: not in enabled drivers build config 00:25:28.150 bus/ifpga: not in enabled drivers build config 00:25:28.150 bus/platform: not in enabled drivers build config 00:25:28.150 bus/uacce: not in enabled drivers build config 00:25:28.150 bus/vmbus: not in enabled drivers build config 00:25:28.150 common/cnxk: not in enabled drivers build 
config 00:25:28.150 common/mlx5: not in enabled drivers build config 00:25:28.150 common/nfp: not in enabled drivers build config 00:25:28.150 common/nitrox: not in enabled drivers build config 00:25:28.150 common/qat: not in enabled drivers build config 00:25:28.150 common/sfc_efx: not in enabled drivers build config 00:25:28.150 mempool/bucket: not in enabled drivers build config 00:25:28.150 mempool/cnxk: not in enabled drivers build config 00:25:28.150 mempool/dpaa: not in enabled drivers build config 00:25:28.150 mempool/dpaa2: not in enabled drivers build config 00:25:28.150 mempool/octeontx: not in enabled drivers build config 00:25:28.150 mempool/stack: not in enabled drivers build config 00:25:28.150 dma/cnxk: not in enabled drivers build config 00:25:28.150 dma/dpaa: not in enabled drivers build config 00:25:28.150 dma/dpaa2: not in enabled drivers build config 00:25:28.150 dma/hisilicon: not in enabled drivers build config 00:25:28.150 dma/idxd: not in enabled drivers build config 00:25:28.150 dma/ioat: not in enabled drivers build config 00:25:28.150 dma/skeleton: not in enabled drivers build config 00:25:28.150 net/af_packet: not in enabled drivers build config 00:25:28.150 net/af_xdp: not in enabled drivers build config 00:25:28.150 net/ark: not in enabled drivers build config 00:25:28.150 net/atlantic: not in enabled drivers build config 00:25:28.150 net/avp: not in enabled drivers build config 00:25:28.150 net/axgbe: not in enabled drivers build config 00:25:28.150 net/bnx2x: not in enabled drivers build config 00:25:28.150 net/bnxt: not in enabled drivers build config 00:25:28.150 net/bonding: not in enabled drivers build config 00:25:28.150 net/cnxk: not in enabled drivers build config 00:25:28.150 net/cpfl: not in enabled drivers build config 00:25:28.150 net/cxgbe: not in enabled drivers build config 00:25:28.150 net/dpaa: not in enabled drivers build config 00:25:28.150 net/dpaa2: not in enabled drivers build config 00:25:28.150 net/e1000: not 
in enabled drivers build config 00:25:28.150 net/ena: not in enabled drivers build config 00:25:28.150 net/enetc: not in enabled drivers build config 00:25:28.150 net/enetfec: not in enabled drivers build config 00:25:28.150 net/enic: not in enabled drivers build config 00:25:28.150 net/failsafe: not in enabled drivers build config 00:25:28.150 net/fm10k: not in enabled drivers build config 00:25:28.150 net/gve: not in enabled drivers build config 00:25:28.150 net/hinic: not in enabled drivers build config 00:25:28.150 net/hns3: not in enabled drivers build config 00:25:28.150 net/i40e: not in enabled drivers build config 00:25:28.150 net/iavf: not in enabled drivers build config 00:25:28.150 net/ice: not in enabled drivers build config 00:25:28.150 net/idpf: not in enabled drivers build config 00:25:28.150 net/igc: not in enabled drivers build config 00:25:28.150 net/ionic: not in enabled drivers build config 00:25:28.150 net/ipn3ke: not in enabled drivers build config 00:25:28.150 net/ixgbe: not in enabled drivers build config 00:25:28.150 net/mana: not in enabled drivers build config 00:25:28.150 net/memif: not in enabled drivers build config 00:25:28.150 net/mlx4: not in enabled drivers build config 00:25:28.150 net/mlx5: not in enabled drivers build config 00:25:28.150 net/mvneta: not in enabled drivers build config 00:25:28.150 net/mvpp2: not in enabled drivers build config 00:25:28.150 net/netvsc: not in enabled drivers build config 00:25:28.150 net/nfb: not in enabled drivers build config 00:25:28.150 net/nfp: not in enabled drivers build config 00:25:28.150 net/ngbe: not in enabled drivers build config 00:25:28.150 net/null: not in enabled drivers build config 00:25:28.150 net/octeontx: not in enabled drivers build config 00:25:28.150 net/octeon_ep: not in enabled drivers build config 00:25:28.150 net/pcap: not in enabled drivers build config 00:25:28.150 net/pfe: not in enabled drivers build config 00:25:28.150 net/qede: not in enabled drivers build 
config 00:25:28.150 net/ring: not in enabled drivers build config 00:25:28.150 net/sfc: not in enabled drivers build config 00:25:28.150 net/softnic: not in enabled drivers build config 00:25:28.150 net/tap: not in enabled drivers build config 00:25:28.150 net/thunderx: not in enabled drivers build config 00:25:28.150 net/txgbe: not in enabled drivers build config 00:25:28.150 net/vdev_netvsc: not in enabled drivers build config 00:25:28.150 net/vhost: not in enabled drivers build config 00:25:28.150 net/virtio: not in enabled drivers build config 00:25:28.150 net/vmxnet3: not in enabled drivers build config 00:25:28.150 raw/*: missing internal dependency, "rawdev" 00:25:28.150 crypto/armv8: not in enabled drivers build config 00:25:28.150 crypto/bcmfs: not in enabled drivers build config 00:25:28.150 crypto/caam_jr: not in enabled drivers build config 00:25:28.150 crypto/ccp: not in enabled drivers build config 00:25:28.150 crypto/cnxk: not in enabled drivers build config 00:25:28.150 crypto/dpaa_sec: not in enabled drivers build config 00:25:28.150 crypto/dpaa2_sec: not in enabled drivers build config 00:25:28.150 crypto/ipsec_mb: not in enabled drivers build config 00:25:28.150 crypto/mlx5: not in enabled drivers build config 00:25:28.150 crypto/mvsam: not in enabled drivers build config 00:25:28.150 crypto/nitrox: not in enabled drivers build config 00:25:28.150 crypto/null: not in enabled drivers build config 00:25:28.150 crypto/octeontx: not in enabled drivers build config 00:25:28.150 crypto/openssl: not in enabled drivers build config 00:25:28.150 crypto/scheduler: not in enabled drivers build config 00:25:28.150 crypto/uadk: not in enabled drivers build config 00:25:28.150 crypto/virtio: not in enabled drivers build config 00:25:28.150 compress/isal: not in enabled drivers build config 00:25:28.150 compress/mlx5: not in enabled drivers build config 00:25:28.150 compress/nitrox: not in enabled drivers build config 00:25:28.150 compress/octeontx: not in 
enabled drivers build config 00:25:28.150 compress/zlib: not in enabled drivers build config 00:25:28.150 regex/*: missing internal dependency, "regexdev" 00:25:28.150 ml/*: missing internal dependency, "mldev" 00:25:28.150 vdpa/ifc: not in enabled drivers build config 00:25:28.150 vdpa/mlx5: not in enabled drivers build config 00:25:28.150 vdpa/nfp: not in enabled drivers build config 00:25:28.150 vdpa/sfc: not in enabled drivers build config 00:25:28.150 event/*: missing internal dependency, "eventdev" 00:25:28.150 baseband/*: missing internal dependency, "bbdev" 00:25:28.150 gpu/*: missing internal dependency, "gpudev" 00:25:28.150 00:25:28.150 00:25:28.718 Build targets in project: 61 00:25:28.718 00:25:28.718 DPDK 24.03.0 00:25:28.718 00:25:28.718 User defined options 00:25:28.718 default_library : static 00:25:28.718 libdir : lib 00:25:28.718 prefix : /mnt/sdadir/spdk/dpdk/build 00:25:28.718 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Wno-error 00:25:28.718 c_link_args : 00:25:28.718 cpu_instruction_set: native 00:25:28.718 disable_apps : test-pmd,test-mldev,test-pipeline,test-security-perf,proc-info,pdump,test-cmdline,test-sad,test-dma-perf,test-crypto-perf,test-bbdev,test-regex,test-compress-perf,test-gpudev,test-fib,test-acl,test,dumpcap,test-eventdev,graph,test-flow-perf 00:25:28.718 disable_libs : acl,distributor,bitratestats,regexdev,gro,dispatcher,gso,latencystats,pcapng,gpudev,pdump,pipeline,argparse,node,ipsec,pdcp,mldev,lpm,ip_frag,efd,eventdev,sched,jobstats,rawdev,bbdev,bpf,cfgfile,rib,member,metrics,fib,graph,port,table,stack 00:25:28.718 enable_docs : false 00:25:28.718 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:25:28.718 enable_kmods : false 00:25:28.718 max_lcores : 128 00:25:28.718 tests : false 00:25:28.718 00:25:28.718 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:25:29.287 ninja: Entering directory `/mnt/sdadir/spdk/dpdk/build-tmp' 00:25:29.287 [1/244] 
Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:25:29.551 [2/244] Compiling C object lib/librte_log.a.p/log_log.c.o 00:25:29.812 [3/244] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:25:29.812 [4/244] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:25:29.812 [5/244] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:25:30.070 [6/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:25:30.070 [7/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:25:30.070 [8/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:25:30.070 [9/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:25:30.070 [10/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:25:30.070 [11/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:25:30.070 [12/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:25:30.070 [13/244] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:25:30.070 [14/244] Linking static target lib/librte_log.a 00:25:30.070 [15/244] Linking target lib/librte_log.so.24.1 00:25:30.328 [16/244] Linking static target lib/librte_kvargs.a 00:25:30.328 [17/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:25:30.328 [18/244] Linking static target lib/librte_telemetry.a 00:25:30.328 [19/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:25:30.328 [20/244] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:25:30.587 [21/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:25:30.587 [22/244] Linking target lib/librte_kvargs.so.24.1 00:25:30.587 [23/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:25:30.587 [24/244] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:25:30.847 [25/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:25:30.847 [26/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:25:30.847 [27/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:25:30.847 [28/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:25:30.847 [29/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:25:30.847 [30/244] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:25:30.847 [31/244] Linking target lib/librte_telemetry.so.24.1 00:25:31.106 [32/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:25:31.106 [33/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:25:31.106 [34/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:25:31.106 [35/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:25:31.106 [36/244] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:25:31.106 [37/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:25:31.106 [38/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:25:31.365 [39/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:25:31.365 [40/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:25:31.366 [41/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:25:31.366 [42/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:25:31.625 [43/244] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:25:31.625 [44/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:25:31.625 [45/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 
00:25:31.625 [46/244] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:25:31.625 [47/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:25:31.625 [48/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:25:31.625 [49/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:25:31.884 [50/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:25:31.884 [51/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:25:31.884 [52/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:25:31.884 [53/244] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:25:31.884 [54/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:25:31.884 [55/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:25:31.884 [56/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:25:31.884 [57/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:25:31.884 [58/244] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:25:31.884 [59/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:25:31.884 [60/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:25:32.143 [61/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:25:32.143 [62/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:25:32.143 [63/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:25:32.143 [64/244] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:25:32.401 [65/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:25:32.401 [66/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:25:32.401 [67/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:25:32.401 [68/244] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:25:32.661 [69/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:25:32.661 [70/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:25:32.661 [71/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:25:32.661 [72/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:25:32.661 [73/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:25:32.661 [74/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:25:32.661 [75/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:25:32.661 [76/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:25:32.661 [77/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:25:32.920 [78/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:25:32.920 [79/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:25:32.920 [80/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:25:33.178 [81/244] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:25:33.178 [82/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:25:33.178 [83/244] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:25:33.178 [84/244] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:25:33.178 [85/244] Linking static target lib/librte_ring.a 00:25:33.178 [86/244] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:25:33.178 [87/244] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:25:33.437 [88/244] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:25:33.437 [89/244] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:25:33.437 [90/244] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:25:33.437 [91/244] Compiling C object 
lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:25:33.437 [92/244] Linking static target lib/librte_mempool.a 00:25:33.437 [93/244] Linking static target lib/net/libnet_crc_avx512_lib.a 00:25:33.696 [94/244] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:25:33.696 [95/244] Linking target lib/librte_eal.so.24.1 00:25:33.696 [96/244] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:25:33.696 [97/244] Linking static target lib/librte_eal.a 00:25:33.696 [98/244] Linking static target lib/librte_rcu.a 00:25:33.697 [99/244] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:25:33.697 [100/244] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:25:33.697 [101/244] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:25:33.697 [102/244] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:25:33.697 [103/244] Linking static target lib/librte_net.a 00:25:33.956 [104/244] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:25:33.956 [105/244] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:25:33.956 [106/244] Linking static target lib/librte_meter.a 00:25:33.956 [107/244] Linking target lib/librte_ring.so.24.1 00:25:33.956 [108/244] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:25:33.956 [109/244] Linking target lib/librte_meter.so.24.1 00:25:33.956 [110/244] Linking static target lib/librte_mbuf.a 00:25:33.956 [111/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:25:34.215 [112/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:25:34.215 [113/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:25:34.215 [114/244] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:25:34.215 [115/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:25:34.215 [116/244] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:25:34.215 
[117/244] Linking target lib/librte_rcu.so.24.1 00:25:34.215 [118/244] Linking target lib/librte_mempool.so.24.1 00:25:34.474 [119/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:25:34.474 [120/244] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:25:34.474 [121/244] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:25:34.474 [122/244] Linking target lib/librte_mbuf.so.24.1 00:25:34.733 [123/244] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:25:34.733 [124/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:25:34.991 [125/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:25:34.991 [126/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:25:34.991 [127/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:25:34.991 [128/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:25:34.991 [129/244] Linking target lib/librte_net.so.24.1 00:25:34.991 [130/244] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:25:34.991 [131/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:25:35.250 [132/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:25:35.250 [133/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:25:35.250 [134/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:25:35.250 [135/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:25:35.250 [136/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:25:35.250 [137/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:25:35.250 [138/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:25:35.250 [139/244] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:25:35.250 [140/244] Linking static target 
lib/librte_pci.a 00:25:35.509 [141/244] Linking target lib/librte_pci.so.24.1 00:25:35.509 [142/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:25:35.509 [143/244] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:25:35.509 [144/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:25:35.768 [145/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:25:35.768 [146/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:25:35.768 [147/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:25:35.768 [148/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:25:35.768 [149/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:25:35.768 [150/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:25:35.768 [151/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:25:35.768 [152/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:25:35.768 [153/244] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:25:36.028 [154/244] Linking static target lib/librte_cmdline.a 00:25:36.028 [155/244] Linking target lib/librte_cmdline.so.24.1 00:25:36.028 [156/244] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:25:36.028 [157/244] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:25:36.287 [158/244] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:25:36.287 [159/244] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:25:36.546 [160/244] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:25:36.546 [161/244] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:25:36.546 [162/244] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:25:36.546 [163/244] 
Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:25:36.546 [164/244] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:25:36.546 [165/244] Linking static target lib/librte_timer.a 00:25:36.546 [166/244] Linking target lib/librte_timer.so.24.1 00:25:36.546 [167/244] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:25:36.546 [168/244] Linking static target lib/librte_compressdev.a 00:25:36.546 [169/244] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:25:36.546 [170/244] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:25:36.805 [171/244] Linking target lib/librte_compressdev.so.24.1 00:25:36.805 [172/244] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:25:36.805 [173/244] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:25:36.805 [174/244] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:25:37.064 [175/244] Linking static target lib/librte_dmadev.a 00:25:37.064 [176/244] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:25:37.064 [177/244] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:25:37.064 [178/244] Linking target lib/librte_dmadev.so.24.1 00:25:37.064 [179/244] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:25:37.323 [180/244] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:25:37.323 [181/244] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:25:37.323 [182/244] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:25:37.323 [183/244] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:25:37.323 [184/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:25:37.323 [185/244] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:25:37.582 [186/244] Compiling C object 
lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:25:37.582 [187/244] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:25:37.582 [188/244] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:25:37.582 [189/244] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:25:37.582 [190/244] Linking static target lib/librte_reorder.a 00:25:37.582 [191/244] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:25:37.842 [192/244] Linking static target lib/librte_hash.a 00:25:37.842 [193/244] Linking target lib/librte_ethdev.so.24.1 00:25:37.842 [194/244] Linking static target lib/librte_security.a 00:25:37.842 [195/244] Linking target lib/librte_hash.so.24.1 00:25:37.842 [196/244] Linking static target lib/librte_power.a 00:25:37.842 [197/244] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:25:37.842 [198/244] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:25:37.842 [199/244] Linking target lib/librte_reorder.so.24.1 00:25:37.842 [200/244] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:25:37.842 [201/244] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:25:37.842 [202/244] Linking static target lib/librte_ethdev.a 00:25:38.101 [203/244] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:25:38.101 [204/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:25:38.101 [205/244] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:25:38.101 [206/244] Linking target lib/librte_power.so.24.1 00:25:38.101 [207/244] Linking static target lib/librte_cryptodev.a 00:25:38.101 [208/244] Linking target lib/librte_cryptodev.so.24.1 00:25:38.361 [209/244] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:25:38.361 [210/244] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:25:38.621 [211/244] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:25:38.621 [212/244] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:25:38.621 [213/244] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:25:38.621 [214/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:25:38.621 [215/244] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:25:38.621 [216/244] Linking target lib/librte_security.so.24.1 00:25:38.621 [217/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:25:38.880 [218/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:25:38.880 [219/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:25:38.880 [220/244] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:25:39.139 [221/244] Linking static target drivers/libtmp_rte_bus_pci.a 00:25:39.139 [222/244] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:25:39.139 [223/244] Linking static target drivers/libtmp_rte_bus_vdev.a 00:25:39.399 [224/244] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:25:39.399 [225/244] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:25:39.399 [226/244] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:25:39.399 [227/244] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:25:39.399 [228/244] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:25:39.659 [229/244] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:25:39.659 [230/244] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:25:39.659 [231/244] Linking static target drivers/libtmp_rte_mempool_ring.a 00:25:39.659 [232/244] Linking static target drivers/librte_bus_vdev.a 00:25:39.659 [233/244] 
Linking static target drivers/librte_bus_pci.a 00:25:39.659 [234/244] Linking target drivers/librte_bus_pci.so.24.1 00:25:39.659 [235/244] Linking target drivers/librte_bus_vdev.so.24.1 00:25:39.918 [236/244] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:25:39.918 [237/244] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:25:39.918 [238/244] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:25:39.918 [239/244] Linking static target drivers/librte_mempool_ring.a 00:25:39.918 [240/244] Linking target drivers/librte_mempool_ring.so.24.1 00:25:41.299 [241/244] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:25:47.870 [242/244] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:25:47.870 [243/244] Linking target lib/librte_vhost.so.24.1 00:25:47.870 [244/244] Linking static target lib/librte_vhost.a 00:25:47.870 INFO: autodetecting backend as ninja 00:25:47.870 INFO: calculating backend command to run: /usr/local/bin/ninja -C /mnt/sdadir/spdk/dpdk/build-tmp 00:25:52.066 CC lib/log/log_flags.o 00:25:52.066 CC lib/log/log.o 00:25:52.066 CC lib/log/log_deprecated.o 00:25:52.066 CC lib/ut_mock/mock.o 00:25:52.636 LIB libspdk_ut_mock.a 00:25:52.636 LIB libspdk_log.a 00:25:52.910 CC lib/ioat/ioat.o 00:25:52.910 CC lib/util/base64.o 00:25:52.910 CC lib/dma/dma.o 00:25:52.910 CC lib/util/bit_array.o 00:25:52.910 CC lib/util/cpuset.o 00:25:52.910 CC lib/util/crc16.o 00:25:52.910 CC lib/util/crc32.o 00:25:52.910 CC lib/util/crc32c.o 00:25:52.910 CXX lib/trace_parser/trace.o 00:25:52.910 CC lib/util/crc32_ieee.o 00:25:52.910 CC lib/util/crc64.o 00:25:52.910 CC lib/util/dif.o 00:25:52.910 CC lib/util/fd.o 00:25:52.910 CC lib/util/fd_group.o 00:25:52.910 CC lib/util/file.o 00:25:52.910 CC lib/util/hexlify.o 00:25:52.910 CC lib/util/iov.o 00:25:52.910 CC lib/util/math.o 00:25:52.910 CC lib/util/net.o 00:25:52.910 CC lib/util/pipe.o 
00:25:52.910 CC lib/util/strerror_tls.o 00:25:52.910 CC lib/util/string.o 00:25:52.910 CC lib/util/uuid.o 00:25:52.910 CC lib/util/xor.o 00:25:52.910 CC lib/util/zipf.o 00:25:53.186 CC lib/vfio_user/host/vfio_user_pci.o 00:25:53.186 CC lib/vfio_user/host/vfio_user.o 00:25:53.445 LIB libspdk_dma.a 00:25:53.445 LIB libspdk_ioat.a 00:25:53.704 LIB libspdk_vfio_user.a 00:25:53.963 LIB libspdk_trace_parser.a 00:25:54.223 LIB libspdk_util.a 00:25:54.791 CC lib/vmd/vmd.o 00:25:54.791 CC lib/vmd/led.o 00:25:54.791 CC lib/json/json_parse.o 00:25:54.791 CC lib/json/json_util.o 00:25:54.791 CC lib/json/json_write.o 00:25:54.791 CC lib/conf/conf.o 00:25:54.791 CC lib/env_dpdk/env.o 00:25:54.791 CC lib/env_dpdk/memory.o 00:25:54.791 CC lib/env_dpdk/pci.o 00:25:54.791 CC lib/env_dpdk/init.o 00:25:54.791 CC lib/env_dpdk/threads.o 00:25:54.791 CC lib/env_dpdk/pci_ioat.o 00:25:54.791 CC lib/env_dpdk/pci_virtio.o 00:25:54.791 CC lib/env_dpdk/pci_vmd.o 00:25:54.791 CC lib/env_dpdk/pci_idxd.o 00:25:54.791 CC lib/env_dpdk/pci_event.o 00:25:54.791 CC lib/env_dpdk/sigbus_handler.o 00:25:54.791 CC lib/env_dpdk/pci_dpdk.o 00:25:54.791 CC lib/env_dpdk/pci_dpdk_2207.o 00:25:54.791 CC lib/env_dpdk/pci_dpdk_2211.o 00:25:55.359 LIB libspdk_conf.a 00:25:55.619 LIB libspdk_vmd.a 00:25:55.619 LIB libspdk_json.a 00:25:56.187 CC lib/jsonrpc/jsonrpc_server.o 00:25:56.187 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:25:56.187 CC lib/jsonrpc/jsonrpc_client.o 00:25:56.187 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:25:56.187 LIB libspdk_env_dpdk.a 00:25:56.447 LIB libspdk_jsonrpc.a 00:25:57.015 CC lib/rpc/rpc.o 00:25:57.580 LIB libspdk_rpc.a 00:25:57.838 CC lib/trace/trace.o 00:25:57.838 CC lib/trace/trace_flags.o 00:25:57.838 CC lib/trace/trace_rpc.o 00:25:57.838 CC lib/keyring/keyring.o 00:25:57.838 CC lib/keyring/keyring_rpc.o 00:25:57.838 CC lib/notify/notify.o 00:25:57.838 CC lib/notify/notify_rpc.o 00:25:58.098 LIB libspdk_notify.a 00:25:58.357 LIB libspdk_keyring.a 00:25:58.357 LIB libspdk_trace.a 
00:25:58.925 CC lib/sock/sock.o 00:25:58.925 CC lib/sock/sock_rpc.o 00:25:58.925 CC lib/thread/thread.o 00:25:58.925 CC lib/thread/iobuf.o 00:25:59.494 LIB libspdk_sock.a 00:25:59.753 CC lib/nvme/nvme_ctrlr_cmd.o 00:25:59.753 CC lib/nvme/nvme_fabric.o 00:25:59.753 CC lib/nvme/nvme_ctrlr.o 00:25:59.753 CC lib/nvme/nvme_ns.o 00:25:59.753 CC lib/nvme/nvme_pcie_common.o 00:25:59.753 CC lib/nvme/nvme_ns_cmd.o 00:25:59.753 CC lib/nvme/nvme.o 00:25:59.753 CC lib/nvme/nvme_qpair.o 00:25:59.753 CC lib/nvme/nvme_discovery.o 00:25:59.753 CC lib/nvme/nvme_quirks.o 00:25:59.753 CC lib/nvme/nvme_transport.o 00:25:59.753 CC lib/nvme/nvme_pcie.o 00:25:59.753 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:25:59.753 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:26:00.012 CC lib/nvme/nvme_tcp.o 00:26:00.012 CC lib/nvme/nvme_opal.o 00:26:00.012 CC lib/nvme/nvme_io_msg.o 00:26:00.012 CC lib/nvme/nvme_poll_group.o 00:26:00.012 CC lib/nvme/nvme_zns.o 00:26:00.012 CC lib/nvme/nvme_stubs.o 00:26:00.012 CC lib/nvme/nvme_auth.o 00:26:00.012 CC lib/nvme/nvme_cuse.o 00:26:00.272 LIB libspdk_thread.a 00:26:01.209 CC lib/init/json_config.o 00:26:01.209 CC lib/init/subsystem.o 00:26:01.209 CC lib/init/subsystem_rpc.o 00:26:01.209 CC lib/init/rpc.o 00:26:01.209 CC lib/blob/blobstore.o 00:26:01.209 CC lib/accel/accel.o 00:26:01.209 CC lib/accel/accel_rpc.o 00:26:01.209 CC lib/blob/request.o 00:26:01.209 CC lib/blob/zeroes.o 00:26:01.209 CC lib/blob/blob_bs_dev.o 00:26:01.209 CC lib/accel/accel_sw.o 00:26:01.209 CC lib/virtio/virtio.o 00:26:01.209 CC lib/virtio/virtio_vhost_user.o 00:26:01.209 CC lib/virtio/virtio_vfio_user.o 00:26:01.209 CC lib/virtio/virtio_pci.o 00:26:02.145 LIB libspdk_init.a 00:26:02.145 LIB libspdk_virtio.a 00:26:02.710 LIB libspdk_accel.a 00:26:02.710 CC lib/event/app.o 00:26:02.710 CC lib/event/reactor.o 00:26:02.710 CC lib/event/app_rpc.o 00:26:02.710 CC lib/event/scheduler_static.o 00:26:02.710 CC lib/event/log_rpc.o 00:26:03.276 LIB libspdk_nvme.a 00:26:03.276 LIB libspdk_event.a 
00:26:03.276 CC lib/bdev/bdev_zone.o 00:26:03.276 CC lib/bdev/bdev.o 00:26:03.276 CC lib/bdev/bdev_rpc.o 00:26:03.276 CC lib/bdev/scsi_nvme.o 00:26:03.276 CC lib/bdev/part.o 00:26:04.653 LIB libspdk_blob.a 00:26:05.589 CC lib/lvol/lvol.o 00:26:05.589 CC lib/blobfs/tree.o 00:26:05.589 CC lib/blobfs/blobfs.o 00:26:05.848 LIB libspdk_bdev.a 00:26:06.107 LIB libspdk_blobfs.a 00:26:06.674 LIB libspdk_lvol.a 00:26:06.933 CC lib/nvmf/ctrlr.o 00:26:06.933 CC lib/nvmf/ctrlr_discovery.o 00:26:06.933 CC lib/nvmf/ctrlr_bdev.o 00:26:06.933 CC lib/nvmf/subsystem.o 00:26:06.933 CC lib/nvmf/nvmf.o 00:26:06.933 CC lib/nvmf/nvmf_rpc.o 00:26:06.933 CC lib/nvmf/transport.o 00:26:06.933 CC lib/nvmf/tcp.o 00:26:06.933 CC lib/nvmf/stubs.o 00:26:06.933 CC lib/ftl/ftl_core.o 00:26:06.933 CC lib/nbd/nbd.o 00:26:06.934 CC lib/ftl/ftl_init.o 00:26:06.934 CC lib/nvmf/mdns_server.o 00:26:06.934 CC lib/ftl/ftl_layout.o 00:26:06.934 CC lib/nvmf/auth.o 00:26:06.934 CC lib/ftl/ftl_debug.o 00:26:06.934 CC lib/nbd/nbd_rpc.o 00:26:06.934 CC lib/scsi/dev.o 00:26:06.934 CC lib/ftl/ftl_io.o 00:26:06.934 CC lib/scsi/lun.o 00:26:06.934 CC lib/scsi/port.o 00:26:06.934 CC lib/ftl/ftl_sb.o 00:26:06.934 CC lib/scsi/scsi.o 00:26:06.934 CC lib/ftl/ftl_l2p.o 00:26:06.934 CC lib/scsi/scsi_pr.o 00:26:06.934 CC lib/scsi/scsi_rpc.o 00:26:06.934 CC lib/scsi/scsi_bdev.o 00:26:06.934 CC lib/ftl/ftl_l2p_flat.o 00:26:06.934 CC lib/scsi/task.o 00:26:06.934 CC lib/ftl/ftl_nv_cache.o 00:26:06.934 CC lib/ftl/ftl_band.o 00:26:06.934 CC lib/ftl/ftl_band_ops.o 00:26:06.934 CC lib/ftl/ftl_rq.o 00:26:06.934 CC lib/ftl/ftl_reloc.o 00:26:06.934 CC lib/ftl/ftl_writer.o 00:26:06.934 CC lib/ftl/ftl_l2p_cache.o 00:26:06.934 CC lib/ftl/ftl_p2l.o 00:26:06.934 CC lib/ftl/mngt/ftl_mngt.o 00:26:06.934 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:26:06.934 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:26:06.934 CC lib/ftl/mngt/ftl_mngt_startup.o 00:26:06.934 CC lib/ftl/mngt/ftl_mngt_md.o 00:26:06.934 CC lib/ftl/mngt/ftl_mngt_misc.o 00:26:06.934 CC 
lib/ftl/mngt/ftl_mngt_ioch.o 00:26:06.934 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:26:06.934 CC lib/ftl/mngt/ftl_mngt_band.o 00:26:06.934 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:26:06.934 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:26:06.934 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:26:06.934 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:26:06.934 CC lib/ftl/utils/ftl_conf.o 00:26:07.194 CC lib/ftl/utils/ftl_md.o 00:26:07.194 CC lib/ftl/utils/ftl_mempool.o 00:26:07.194 CC lib/ftl/utils/ftl_bitmap.o 00:26:07.194 CC lib/ftl/utils/ftl_property.o 00:26:07.194 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:26:07.194 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:26:07.194 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:26:07.194 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:26:07.194 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:26:07.194 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:26:07.194 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:26:07.194 CC lib/ftl/upgrade/ftl_sb_v3.o 00:26:07.194 CC lib/ftl/upgrade/ftl_sb_v5.o 00:26:07.194 CC lib/ftl/nvc/ftl_nvc_dev.o 00:26:07.194 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:26:07.194 CC lib/ftl/base/ftl_base_dev.o 00:26:07.194 CC lib/ftl/base/ftl_base_bdev.o 00:26:09.134 LIB libspdk_nbd.a 00:26:09.134 LIB libspdk_scsi.a 00:26:09.134 LIB libspdk_ftl.a 00:26:09.393 CC lib/vhost/vhost.o 00:26:09.393 CC lib/vhost/vhost_scsi.o 00:26:09.393 CC lib/vhost/vhost_rpc.o 00:26:09.393 CC lib/vhost/rte_vhost_user.o 00:26:09.393 CC lib/vhost/vhost_blk.o 00:26:09.393 CC lib/iscsi/conn.o 00:26:09.393 CC lib/iscsi/iscsi.o 00:26:09.393 CC lib/iscsi/init_grp.o 00:26:09.393 CC lib/iscsi/md5.o 00:26:09.393 CC lib/iscsi/param.o 00:26:09.393 CC lib/iscsi/portal_grp.o 00:26:09.393 CC lib/iscsi/tgt_node.o 00:26:09.393 CC lib/iscsi/iscsi_subsystem.o 00:26:09.393 CC lib/iscsi/iscsi_rpc.o 00:26:09.393 CC lib/iscsi/task.o 00:26:09.653 LIB libspdk_nvmf.a 00:26:10.591 LIB libspdk_vhost.a 00:26:11.159 LIB libspdk_iscsi.a 00:26:14.455 CC module/env_dpdk/env_dpdk_rpc.o 00:26:14.455 CC module/blob/bdev/blob_bdev.o 00:26:14.455 
CC module/scheduler/dpdk_governor/dpdk_governor.o 00:26:14.455 CC module/keyring/file/keyring.o 00:26:14.455 CC module/keyring/file/keyring_rpc.o 00:26:14.455 CC module/scheduler/dynamic/scheduler_dynamic.o 00:26:14.455 CC module/accel/error/accel_error.o 00:26:14.455 CC module/accel/error/accel_error_rpc.o 00:26:14.455 CC module/keyring/linux/keyring.o 00:26:14.455 CC module/scheduler/gscheduler/gscheduler.o 00:26:14.455 CC module/keyring/linux/keyring_rpc.o 00:26:14.455 CC module/accel/ioat/accel_ioat.o 00:26:14.455 CC module/sock/posix/posix.o 00:26:14.455 CC module/accel/ioat/accel_ioat_rpc.o 00:26:14.455 LIB libspdk_env_dpdk_rpc.a 00:26:14.455 LIB libspdk_scheduler_dpdk_governor.a 00:26:14.455 LIB libspdk_keyring_linux.a 00:26:14.455 LIB libspdk_scheduler_gscheduler.a 00:26:14.455 LIB libspdk_keyring_file.a 00:26:14.455 LIB libspdk_accel_error.a 00:26:14.714 LIB libspdk_scheduler_dynamic.a 00:26:14.714 LIB libspdk_accel_ioat.a 00:26:14.714 LIB libspdk_blob_bdev.a 00:26:14.974 LIB libspdk_sock_posix.a 00:26:15.233 CC module/blobfs/bdev/blobfs_bdev.o 00:26:15.233 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:26:15.233 CC module/bdev/nvme/bdev_nvme.o 00:26:15.233 CC module/bdev/nvme/bdev_nvme_rpc.o 00:26:15.233 CC module/bdev/nvme/nvme_rpc.o 00:26:15.233 CC module/bdev/error/vbdev_error_rpc.o 00:26:15.233 CC module/bdev/nvme/bdev_mdns_client.o 00:26:15.233 CC module/bdev/nvme/vbdev_opal.o 00:26:15.233 CC module/bdev/nvme/vbdev_opal_rpc.o 00:26:15.233 CC module/bdev/lvol/vbdev_lvol.o 00:26:15.233 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:26:15.233 CC module/bdev/error/vbdev_error.o 00:26:15.233 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:26:15.233 CC module/bdev/aio/bdev_aio.o 00:26:15.233 CC module/bdev/malloc/bdev_malloc.o 00:26:15.233 CC module/bdev/aio/bdev_aio_rpc.o 00:26:15.233 CC module/bdev/null/bdev_null.o 00:26:15.233 CC module/bdev/delay/vbdev_delay.o 00:26:15.233 CC module/bdev/malloc/bdev_malloc_rpc.o 00:26:15.233 CC module/bdev/delay/vbdev_delay_rpc.o 
00:26:15.233 CC module/bdev/null/bdev_null_rpc.o 00:26:15.233 CC module/bdev/virtio/bdev_virtio_scsi.o 00:26:15.233 CC module/bdev/gpt/gpt.o 00:26:15.233 CC module/bdev/passthru/vbdev_passthru.o 00:26:15.233 CC module/bdev/virtio/bdev_virtio_blk.o 00:26:15.233 CC module/bdev/gpt/vbdev_gpt.o 00:26:15.233 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:26:15.233 CC module/bdev/virtio/bdev_virtio_rpc.o 00:26:15.233 CC module/bdev/ftl/bdev_ftl.o 00:26:15.233 CC module/bdev/zone_block/vbdev_zone_block.o 00:26:15.233 CC module/bdev/ftl/bdev_ftl_rpc.o 00:26:15.233 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:26:15.233 CC module/bdev/split/vbdev_split.o 00:26:15.233 CC module/bdev/split/vbdev_split_rpc.o 00:26:15.233 CC module/bdev/raid/bdev_raid.o 00:26:15.493 CC module/bdev/raid/bdev_raid_rpc.o 00:26:15.493 CC module/bdev/raid/bdev_raid_sb.o 00:26:15.493 CC module/bdev/raid/raid0.o 00:26:15.493 CC module/bdev/raid/raid1.o 00:26:15.493 CC module/bdev/raid/concat.o 00:26:16.062 LIB libspdk_blobfs_bdev.a 00:26:16.062 LIB libspdk_bdev_split.a 00:26:16.321 LIB libspdk_bdev_null.a 00:26:16.321 LIB libspdk_bdev_error.a 00:26:16.321 LIB libspdk_bdev_gpt.a 00:26:16.321 LIB libspdk_bdev_zone_block.a 00:26:16.321 LIB libspdk_bdev_passthru.a 00:26:16.321 LIB libspdk_bdev_aio.a 00:26:16.321 LIB libspdk_bdev_ftl.a 00:26:16.321 LIB libspdk_bdev_delay.a 00:26:16.321 LIB libspdk_bdev_malloc.a 00:26:16.581 LIB libspdk_bdev_virtio.a 00:26:16.581 LIB libspdk_bdev_lvol.a 00:26:16.840 LIB libspdk_bdev_raid.a 00:26:18.218 LIB libspdk_bdev_nvme.a 00:26:19.598 CC module/event/subsystems/scheduler/scheduler.o 00:26:19.598 CC module/event/subsystems/sock/sock.o 00:26:19.598 CC module/event/subsystems/vmd/vmd.o 00:26:19.598 CC module/event/subsystems/vmd/vmd_rpc.o 00:26:19.598 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:26:19.598 CC module/event/subsystems/iobuf/iobuf.o 00:26:19.598 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:26:19.598 CC module/event/subsystems/keyring/keyring.o 
00:26:19.858 LIB libspdk_event_vhost_blk.a 00:26:19.858 LIB libspdk_event_keyring.a 00:26:19.858 LIB libspdk_event_scheduler.a 00:26:19.858 LIB libspdk_event_sock.a 00:26:19.858 LIB libspdk_event_iobuf.a 00:26:19.858 LIB libspdk_event_vmd.a 00:26:20.426 CC module/event/subsystems/accel/accel.o 00:26:20.686 LIB libspdk_event_accel.a 00:26:21.255 CC module/event/subsystems/bdev/bdev.o 00:26:21.514 LIB libspdk_event_bdev.a 00:26:21.772 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:26:21.772 CC module/event/subsystems/nbd/nbd.o 00:26:21.772 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:26:21.772 CC module/event/subsystems/scsi/scsi.o 00:26:22.030 LIB libspdk_event_nbd.a 00:26:22.030 LIB libspdk_event_scsi.a 00:26:22.289 LIB libspdk_event_nvmf.a 00:26:22.548 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:26:22.548 CC module/event/subsystems/iscsi/iscsi.o 00:26:22.806 LIB libspdk_event_vhost_scsi.a 00:26:22.806 LIB libspdk_event_iscsi.a 00:26:23.064 make[1]: Nothing to be done for 'all'. 
00:26:23.323 CC app/spdk_lspci/spdk_lspci.o 00:26:23.323 CC app/trace_record/trace_record.o 00:26:23.323 CC app/spdk_nvme_identify/identify.o 00:26:23.323 CC app/spdk_top/spdk_top.o 00:26:23.323 CC app/spdk_nvme_perf/perf.o 00:26:23.323 CXX app/trace/trace.o 00:26:23.323 CC app/spdk_nvme_discover/discovery_aer.o 00:26:23.323 CC examples/interrupt_tgt/interrupt_tgt.o 00:26:23.323 CC app/nvmf_tgt/nvmf_main.o 00:26:23.323 CC app/iscsi_tgt/iscsi_tgt.o 00:26:23.582 CC app/spdk_dd/spdk_dd.o 00:26:23.582 CC app/spdk_tgt/spdk_tgt.o 00:26:23.582 CC examples/util/zipf/zipf.o 00:26:23.582 CC examples/ioat/perf/perf.o 00:26:23.582 CC examples/ioat/verify/verify.o 00:26:23.841 LINK spdk_lspci 00:26:23.841 LINK nvmf_tgt 00:26:23.841 LINK zipf 00:26:23.841 LINK interrupt_tgt 00:26:23.841 LINK iscsi_tgt 00:26:24.100 LINK verify 00:26:24.100 LINK spdk_nvme_discover 00:26:24.100 LINK ioat_perf 00:26:24.100 LINK spdk_trace_record 00:26:24.100 LINK spdk_tgt 00:26:24.100 LINK spdk_trace 00:26:24.100 LINK spdk_dd 00:26:24.668 LINK spdk_nvme_perf 00:26:24.928 LINK spdk_top 00:26:24.928 LINK spdk_nvme_identify 00:26:25.864 CC app/vhost/vhost.o 00:26:26.123 LINK vhost 00:26:34.305 CC examples/vmd/led/led.o 00:26:34.305 CC examples/sock/hello_world/hello_sock.o 00:26:34.305 CC examples/vmd/lsvmd/lsvmd.o 00:26:34.305 CC examples/thread/thread/thread_ex.o 00:26:34.305 LINK lsvmd 00:26:34.305 LINK led 00:26:34.564 LINK hello_sock 00:26:34.564 LINK thread 00:26:37.100 CC examples/nvme/hello_world/hello_world.o 00:26:37.100 CC examples/nvme/cmb_copy/cmb_copy.o 00:26:37.100 CC examples/nvme/reconnect/reconnect.o 00:26:37.100 CC examples/nvme/arbitration/arbitration.o 00:26:37.100 CC examples/nvme/hotplug/hotplug.o 00:26:37.101 CC examples/nvme/abort/abort.o 00:26:37.101 CC examples/nvme/nvme_manage/nvme_manage.o 00:26:37.101 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:26:37.360 LINK cmb_copy 00:26:37.360 LINK hotplug 00:26:37.619 LINK pmr_persistence 00:26:37.619 LINK hello_world 
00:26:37.619 LINK reconnect 00:26:37.619 LINK abort 00:26:37.619 LINK arbitration 00:26:37.878 LINK nvme_manage 00:26:44.448 CC examples/accel/perf/accel_perf.o 00:26:44.448 CC examples/blob/hello_world/hello_blob.o 00:26:44.448 CC examples/blob/cli/blobcli.o 00:26:44.448 LINK hello_blob 00:26:44.448 LINK accel_perf 00:26:44.448 LINK blobcli 00:26:47.761 CC examples/bdev/hello_world/hello_bdev.o 00:26:47.761 CC examples/bdev/bdevperf/bdevperf.o 00:26:47.761 LINK hello_bdev 00:26:48.698 LINK bdevperf 00:26:53.973 CC examples/nvmf/nvmf/nvmf.o 00:26:54.542 LINK nvmf 00:27:01.114 make: Leaving directory '/mnt/sdadir/spdk' 00:27:01.114 22:28:32 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@101 -- # rm -rf /mnt/sdadir/spdk 00:27:39.863 22:29:08 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@102 -- # umount /mnt/sdadir 00:27:39.863 22:29:08 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@103 -- # rm -rf /mnt/sdadir 00:27:39.863 22:29:08 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@105 -- # stats=($(cat "/sys/block/$dev/stat")) 00:27:39.863 22:29:08 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@105 -- # cat /sys/block/sda/stat 00:27:39.863 READ IO cnt: 101 merges: 0 sectors: 3344 ticks: 73 00:27:39.863 WRITE IO cnt: 627452 merges: 590464 sectors: 10213712 ticks: 451417 00:27:39.863 in flight: 0 io ticks: 203640 time in queue: 487193 00:27:39.863 22:29:08 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@107 -- # printf 'READ IO cnt: % 8u merges: % 8u sectors: % 8u ticks: % 8u\n' 101 0 3344 73 00:27:39.863 22:29:08 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@109 -- # printf 'WRITE IO cnt: % 8u merges: % 8u sectors: % 8u ticks: % 8u\n' 627452 590464 10213712 451417 00:27:39.863 22:29:08 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@111 -- # printf 'in flight: % 8u io ticks: % 8u time in queue: % 8u\n' 0 203640 487193 00:27:39.863 22:29:08 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@1 -- # cleanup 00:27:39.863 
22:29:08 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_delete Nvme0n1 00:27:39.863 [2024-07-23 22:29:08.486187] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Nvme0n1p0) received event(SPDK_BDEV_EVENT_REMOVE) 00:27:39.863 22:29:08 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@13 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_delete EE_Malloc0 00:27:39.863 22:29:08 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:27:39.863 22:29:09 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@15 -- # killprocess 96865 00:27:39.863 22:29:09 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@948 -- # '[' -z 96865 ']' 00:27:39.863 22:29:09 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@952 -- # kill -0 96865 00:27:39.863 22:29:09 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@953 -- # uname 00:27:39.863 22:29:09 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:39.863 22:29:09 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96865 00:27:39.863 killing process with pid 96865 00:27:39.863 22:29:09 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:39.863 22:29:09 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:39.863 22:29:09 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96865' 00:27:39.863 22:29:09 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@967 -- # kill 96865 00:27:39.863 22:29:09 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@972 -- # wait 96865 00:27:39.863 22:29:09 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@17 -- # mountpoint -q /mnt/sdadir 00:27:39.863 22:29:09 iscsi_tgt.iscsi_tgt_ext4test -- 
ext4test/ext4test.sh@18 -- # rm -rf /mnt/sdadir 00:27:39.863 22:29:09 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@20 -- # iscsicleanup 00:27:39.863 22:29:09 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:27:39.863 Cleaning up iSCSI connection 00:27:39.863 22:29:09 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:27:39.863 Logging out of session [sid: 72, target: iqn.2013-06.com.intel.ch.spdk:Target1, portal: 10.0.0.1,3260] 00:27:39.863 Logout of [sid: 72, target: iqn.2013-06.com.intel.ch.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:27:39.863 22:29:09 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:27:39.863 22:29:09 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@983 -- # rm -rf 00:27:39.863 22:29:09 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@21 -- # iscsitestfini 00:27:39.863 22:29:09 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:27:39.863 00:27:39.863 real 5m35.713s 00:27:39.863 user 9m16.111s 00:27:39.863 sys 3m4.325s 00:27:39.863 22:29:09 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:39.863 22:29:09 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@10 -- # set +x 00:27:39.863 ************************************ 00:27:39.863 END TEST iscsi_tgt_ext4test 00:27:39.863 ************************************ 00:27:39.863 22:29:09 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@49 -- # '[' 0 -eq 1 ']' 00:27:39.863 22:29:09 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@57 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:27:39.863 22:29:09 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@59 -- # '[' 0 -eq 1 ']' 00:27:39.863 22:29:09 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@65 -- # cleanup_veth_interfaces 00:27:39.863 22:29:09 iscsi_tgt -- iscsi_tgt/common.sh@95 -- # ip link set init_br nomaster 00:27:39.863 22:29:09 
iscsi_tgt -- iscsi_tgt/common.sh@96 -- # ip link set tgt_br nomaster 00:27:39.863 22:29:09 iscsi_tgt -- iscsi_tgt/common.sh@97 -- # ip link set tgt_br2 nomaster 00:27:39.863 22:29:09 iscsi_tgt -- iscsi_tgt/common.sh@98 -- # ip link set init_br down 00:27:39.863 22:29:09 iscsi_tgt -- iscsi_tgt/common.sh@99 -- # ip link set tgt_br down 00:27:39.863 22:29:09 iscsi_tgt -- iscsi_tgt/common.sh@100 -- # ip link set tgt_br2 down 00:27:39.863 22:29:09 iscsi_tgt -- iscsi_tgt/common.sh@101 -- # ip link delete iscsi_br type bridge 00:27:39.863 22:29:09 iscsi_tgt -- iscsi_tgt/common.sh@102 -- # ip link delete spdk_init_int 00:27:39.863 22:29:09 iscsi_tgt -- iscsi_tgt/common.sh@103 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int 00:27:39.863 22:29:09 iscsi_tgt -- iscsi_tgt/common.sh@104 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int2 00:27:39.863 22:29:09 iscsi_tgt -- iscsi_tgt/common.sh@105 -- # ip netns del spdk_iscsi_ns 00:27:39.863 22:29:09 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:27:39.863 00:27:39.863 real 18m30.252s 00:27:39.863 user 32m40.599s 00:27:39.863 sys 7m31.083s 00:27:39.863 22:29:09 iscsi_tgt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:39.863 22:29:09 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:27:39.863 ************************************ 00:27:39.863 END TEST iscsi_tgt 00:27:39.863 ************************************ 00:27:39.863 22:29:09 -- spdk/autotest.sh@264 -- # run_test spdkcli_iscsi /home/vagrant/spdk_repo/spdk/test/spdkcli/iscsi.sh 00:27:39.863 22:29:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:39.863 22:29:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:39.863 22:29:09 -- common/autotest_common.sh@10 -- # set +x 00:27:39.863 ************************************ 00:27:39.863 START TEST spdkcli_iscsi 00:27:39.863 ************************************ 00:27:39.863 22:29:09 spdkcli_iscsi -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/spdkcli/iscsi.sh 00:27:39.863 * Looking for test storage... 00:27:39.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:27:39.863 22:29:09 spdkcli_iscsi -- spdkcli/iscsi.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:27:39.863 22:29:09 spdkcli_iscsi -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:27:39.863 22:29:09 spdkcli_iscsi -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:27:39.863 22:29:09 spdkcli_iscsi -- spdkcli/iscsi.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:27:39.863 22:29:09 spdkcli_iscsi -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:27:39.863 22:29:09 spdkcli_iscsi -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:27:39.863 22:29:09 spdkcli_iscsi -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:27:39.863 22:29:09 spdkcli_iscsi -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:27:39.863 22:29:09 spdkcli_iscsi -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:27:39.863 22:29:09 spdkcli_iscsi -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:27:39.863 22:29:09 spdkcli_iscsi -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:27:39.863 22:29:09 spdkcli_iscsi -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:27:39.863 22:29:09 spdkcli_iscsi -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:27:39.863 22:29:09 spdkcli_iscsi -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:27:39.863 22:29:09 spdkcli_iscsi -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:27:39.863 22:29:09 spdkcli_iscsi -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:27:39.863 22:29:09 spdkcli_iscsi -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:27:39.863 22:29:09 spdkcli_iscsi -- iscsi_tgt/common.sh@24 -- # 
NETMASK=10.0.0.2/32 00:27:39.863 22:29:09 spdkcli_iscsi -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:27:39.863 22:29:09 spdkcli_iscsi -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:27:39.863 22:29:09 spdkcli_iscsi -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:27:39.863 22:29:09 spdkcli_iscsi -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:27:39.863 22:29:09 spdkcli_iscsi -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:27:39.863 22:29:09 spdkcli_iscsi -- spdkcli/iscsi.sh@12 -- # MATCH_FILE=spdkcli_iscsi.test 00:27:39.863 22:29:09 spdkcli_iscsi -- spdkcli/iscsi.sh@13 -- # SPDKCLI_BRANCH=/iscsi 00:27:39.863 22:29:09 spdkcli_iscsi -- spdkcli/iscsi.sh@15 -- # trap cleanup EXIT 00:27:39.863 22:29:09 spdkcli_iscsi -- spdkcli/iscsi.sh@17 -- # timing_enter run_iscsi_tgt 00:27:39.863 22:29:09 spdkcli_iscsi -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:39.863 22:29:09 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:27:39.864 22:29:09 spdkcli_iscsi -- spdkcli/iscsi.sh@21 -- # iscsi_tgt_pid=135704 00:27:39.864 22:29:09 spdkcli_iscsi -- spdkcli/iscsi.sh@22 -- # waitforlisten 135704 00:27:39.864 22:29:09 spdkcli_iscsi -- spdkcli/iscsi.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x3 -p 0 --wait-for-rpc 00:27:39.864 22:29:09 spdkcli_iscsi -- common/autotest_common.sh@829 -- # '[' -z 135704 ']' 00:27:39.864 22:29:09 spdkcli_iscsi -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.864 22:29:09 spdkcli_iscsi -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:39.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:39.864 22:29:09 spdkcli_iscsi -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:39.864 22:29:09 spdkcli_iscsi -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:39.864 22:29:09 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:27:39.864 [2024-07-23 22:29:10.050286] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 00:27:39.864 [2024-07-23 22:29:10.050371] [ DPDK EAL parameters: iscsi --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135704 ] 00:27:39.864 [2024-07-23 22:29:10.176661] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:39.864 [2024-07-23 22:29:10.193515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:39.864 [2024-07-23 22:29:10.235879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.864 [2024-07-23 22:29:10.235901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.864 22:29:10 spdkcli_iscsi -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:39.864 22:29:10 spdkcli_iscsi -- common/autotest_common.sh@862 -- # return 0 00:27:39.864 22:29:10 spdkcli_iscsi -- spdkcli/iscsi.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:27:39.864 [2024-07-23 22:29:11.238055] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:27:39.864 22:29:11 spdkcli_iscsi -- spdkcli/iscsi.sh@25 -- # timing_exit run_iscsi_tgt 00:27:39.864 22:29:11 spdkcli_iscsi -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:39.864 22:29:11 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:27:39.864 22:29:11 spdkcli_iscsi -- spdkcli/iscsi.sh@27 -- # timing_enter spdkcli_create_iscsi_config 00:27:39.864 22:29:11 spdkcli_iscsi -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:39.864 
22:29:11 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:27:39.864 22:29:11 spdkcli_iscsi -- spdkcli/iscsi.sh@48 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc0'\'' '\''Malloc0'\'' True 00:27:39.864 '\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:39.864 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:39.864 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:39.864 '\''/iscsi/portal_groups create 1 "127.0.0.1:3261 127.0.0.1:3263@0x1"'\'' '\''host=127.0.0.1, port=3261'\'' True 00:27:39.864 '\''/iscsi/portal_groups create 2 127.0.0.1:3262'\'' '\''host=127.0.0.1, port=3262'\'' True 00:27:39.864 '\''/iscsi/initiator_groups create 2 ANY 10.0.2.15/32'\'' '\''hostname=ANY, netmask=10.0.2.15/32'\'' True 00:27:39.864 '\''/iscsi/initiator_groups create 3 ANZ 10.0.2.15/32'\'' '\''hostname=ANZ, netmask=10.0.2.15/32'\'' True 00:27:39.864 '\''/iscsi/initiator_groups add_initiator 2 ANW 10.0.2.16/32'\'' '\''hostname=ANW, netmask=10.0.2.16'\'' True 00:27:39.864 '\''/iscsi/target_nodes create Target0 Target0_alias "Malloc0:0 Malloc1:1" 1:2 64 g=1'\'' '\''Target0'\'' True 00:27:39.864 '\''/iscsi/target_nodes create Target1 Target1_alias Malloc2:0 1:2 64 g=1'\'' '\''Target1'\'' True 00:27:39.864 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_add_pg_ig_maps "1:3 2:2"'\'' '\''portal_group1 - initiator_group3'\'' True 00:27:39.864 '\''/iscsi/target_nodes add_lun iqn.2016-06.io.spdk:Target1 Malloc3 2'\'' '\''Malloc3'\'' True 00:27:39.864 '\''/iscsi/auth_groups create 1 "user:test1 secret:test1 muser:mutual_test1 msecret:mutual_test1,user:test3 secret:test3 muser:mutual_test3 msecret:mutual_test3"'\'' '\''user=test3'\'' True 00:27:39.864 '\''/iscsi/auth_groups add_secret 1 user=test2 secret=test2 muser=mutual_test2 msecret=mutual_test2'\'' '\''user=test2'\'' True 00:27:39.864 '\''/iscsi/auth_groups create 2 "user:test4 secret:test4 
muser:mutual_test4 msecret:mutual_test4"'\'' '\''user=test4'\'' True 00:27:39.864 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 set_auth g=1 d=true'\'' '\''disable_chap: True'\'' True 00:27:39.864 '\''/iscsi/global_params set_auth g=1 d=true r=false'\'' '\''disable_chap: True'\'' True 00:27:39.864 '\''/iscsi ls'\'' '\''Malloc'\'' True 00:27:39.864 ' 00:27:47.988 Executing command: ['/bdevs/malloc create 32 512 Malloc0', 'Malloc0', True] 00:27:47.988 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:47.988 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:47.988 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:47.988 Executing command: ['/iscsi/portal_groups create 1 "127.0.0.1:3261 127.0.0.1:3263@0x1"', 'host=127.0.0.1, port=3261', True] 00:27:47.988 Executing command: ['/iscsi/portal_groups create 2 127.0.0.1:3262', 'host=127.0.0.1, port=3262', True] 00:27:47.989 Executing command: ['/iscsi/initiator_groups create 2 ANY 10.0.2.15/32', 'hostname=ANY, netmask=10.0.2.15/32', True] 00:27:47.989 Executing command: ['/iscsi/initiator_groups create 3 ANZ 10.0.2.15/32', 'hostname=ANZ, netmask=10.0.2.15/32', True] 00:27:47.989 Executing command: ['/iscsi/initiator_groups add_initiator 2 ANW 10.0.2.16/32', 'hostname=ANW, netmask=10.0.2.16', True] 00:27:47.989 Executing command: ['/iscsi/target_nodes create Target0 Target0_alias "Malloc0:0 Malloc1:1" 1:2 64 g=1', 'Target0', True] 00:27:47.989 Executing command: ['/iscsi/target_nodes create Target1 Target1_alias Malloc2:0 1:2 64 g=1', 'Target1', True] 00:27:47.989 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_add_pg_ig_maps "1:3 2:2"', 'portal_group1 - initiator_group3', True] 00:27:47.989 Executing command: ['/iscsi/target_nodes add_lun iqn.2016-06.io.spdk:Target1 Malloc3 2', 'Malloc3', True] 00:27:47.989 Executing command: ['/iscsi/auth_groups create 1 "user:test1 secret:test1 
muser:mutual_test1 msecret:mutual_test1,user:test3 secret:test3 muser:mutual_test3 msecret:mutual_test3"', 'user=test3', True] 00:27:47.989 Executing command: ['/iscsi/auth_groups add_secret 1 user=test2 secret=test2 muser=mutual_test2 msecret=mutual_test2', 'user=test2', True] 00:27:47.989 Executing command: ['/iscsi/auth_groups create 2 "user:test4 secret:test4 muser:mutual_test4 msecret:mutual_test4"', 'user=test4', True] 00:27:47.989 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 set_auth g=1 d=true', 'disable_chap: True', True] 00:27:47.989 Executing command: ['/iscsi/global_params set_auth g=1 d=true r=false', 'disable_chap: True', True] 00:27:47.989 Executing command: ['/iscsi ls', 'Malloc', True] 00:27:47.989 22:29:18 spdkcli_iscsi -- spdkcli/iscsi.sh@49 -- # timing_exit spdkcli_create_iscsi_config 00:27:47.989 22:29:18 spdkcli_iscsi -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:47.989 22:29:18 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:27:47.989 22:29:18 spdkcli_iscsi -- spdkcli/iscsi.sh@51 -- # timing_enter spdkcli_check_match 00:27:47.989 22:29:18 spdkcli_iscsi -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:47.989 22:29:18 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:27:47.989 22:29:18 spdkcli_iscsi -- spdkcli/iscsi.sh@52 -- # check_match 00:27:47.989 22:29:18 spdkcli_iscsi -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /iscsi 00:27:47.989 22:29:19 spdkcli_iscsi -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_iscsi.test.match 00:27:47.989 22:29:19 spdkcli_iscsi -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_iscsi.test 00:27:47.989 22:29:19 spdkcli_iscsi -- spdkcli/iscsi.sh@53 -- # timing_exit spdkcli_check_match 00:27:47.989 22:29:19 spdkcli_iscsi -- common/autotest_common.sh@728 -- # xtrace_disable 
00:27:47.989 22:29:19 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:27:47.989 22:29:19 spdkcli_iscsi -- spdkcli/iscsi.sh@55 -- # timing_enter spdkcli_clear_iscsi_config 00:27:47.989 22:29:19 spdkcli_iscsi -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:47.989 22:29:19 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:27:47.989 22:29:19 spdkcli_iscsi -- spdkcli/iscsi.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/iscsi/auth_groups delete_secret 1 test2'\'' '\''user=test2'\'' 00:27:47.989 '\''/iscsi/auth_groups delete_secret_all 1'\'' '\''user=test1'\'' 00:27:47.989 '\''/iscsi/auth_groups delete 1'\'' '\''user=test1'\'' 00:27:47.989 '\''/iscsi/auth_groups delete_all'\'' '\''user=test4'\'' 00:27:47.989 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_remove_pg_ig_maps "1:3 2:2"'\'' '\''portal_group1 - initiator_group3'\'' 00:27:47.989 '\''/iscsi/target_nodes delete iqn.2016-06.io.spdk:Target1'\'' '\''Target1'\'' 00:27:47.989 '\''/iscsi/target_nodes delete_all'\'' '\''Target0'\'' 00:27:47.989 '\''/iscsi/initiator_groups delete_initiator 2 ANW 10.0.2.16/32'\'' '\''ANW'\'' 00:27:47.989 '\''/iscsi/initiator_groups delete 3'\'' '\''ANZ'\'' 00:27:47.989 '\''/iscsi/initiator_groups delete_all'\'' '\''ANY'\'' 00:27:47.989 '\''/iscsi/portal_groups delete 1'\'' '\''127.0.0.1:3261'\'' 00:27:47.989 '\''/iscsi/portal_groups delete_all'\'' '\''127.0.0.1:3262'\'' 00:27:47.989 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:47.989 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:47.989 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:47.989 '\''/bdevs/malloc delete Malloc0'\'' '\''Malloc0'\'' 00:27:47.989 ' 00:27:54.554 Executing command: ['/iscsi/auth_groups delete_secret 1 test2', 'user=test2', False] 00:27:54.554 Executing command: ['/iscsi/auth_groups delete_secret_all 1', 'user=test1', False] 00:27:54.554 Executing command: ['/iscsi/auth_groups delete 1', 
'user=test1', False] 00:27:54.554 Executing command: ['/iscsi/auth_groups delete_all', 'user=test4', False] 00:27:54.554 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_remove_pg_ig_maps "1:3 2:2"', 'portal_group1 - initiator_group3', False] 00:27:54.554 Executing command: ['/iscsi/target_nodes delete iqn.2016-06.io.spdk:Target1', 'Target1', False] 00:27:54.554 Executing command: ['/iscsi/target_nodes delete_all', 'Target0', False] 00:27:54.554 Executing command: ['/iscsi/initiator_groups delete_initiator 2 ANW 10.0.2.16/32', 'ANW', False] 00:27:54.554 Executing command: ['/iscsi/initiator_groups delete 3', 'ANZ', False] 00:27:54.554 Executing command: ['/iscsi/initiator_groups delete_all', 'ANY', False] 00:27:54.554 Executing command: ['/iscsi/portal_groups delete 1', '127.0.0.1:3261', False] 00:27:54.554 Executing command: ['/iscsi/portal_groups delete_all', '127.0.0.1:3262', False] 00:27:54.555 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:54.555 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:54.555 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:54.555 Executing command: ['/bdevs/malloc delete Malloc0', 'Malloc0', False] 00:27:54.555 22:29:25 spdkcli_iscsi -- spdkcli/iscsi.sh@73 -- # timing_exit spdkcli_clear_iscsi_config 00:27:54.555 22:29:25 spdkcli_iscsi -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:54.555 22:29:25 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:27:54.555 22:29:25 spdkcli_iscsi -- spdkcli/iscsi.sh@75 -- # killprocess 135704 00:27:54.555 22:29:25 spdkcli_iscsi -- common/autotest_common.sh@948 -- # '[' -z 135704 ']' 00:27:54.555 22:29:25 spdkcli_iscsi -- common/autotest_common.sh@952 -- # kill -0 135704 00:27:54.555 22:29:25 spdkcli_iscsi -- common/autotest_common.sh@953 -- # uname 00:27:54.555 22:29:25 spdkcli_iscsi -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:54.555 
22:29:25 spdkcli_iscsi -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 135704 00:27:54.555 22:29:25 spdkcli_iscsi -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:54.555 22:29:25 spdkcli_iscsi -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:54.555 killing process with pid 135704 00:27:54.555 22:29:25 spdkcli_iscsi -- common/autotest_common.sh@966 -- # echo 'killing process with pid 135704' 00:27:54.555 22:29:25 spdkcli_iscsi -- common/autotest_common.sh@967 -- # kill 135704 00:27:54.555 22:29:25 spdkcli_iscsi -- common/autotest_common.sh@972 -- # wait 135704 00:27:54.555 22:29:26 spdkcli_iscsi -- spdkcli/iscsi.sh@1 -- # cleanup 00:27:54.555 22:29:26 spdkcli_iscsi -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:54.555 22:29:26 spdkcli_iscsi -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:27:54.555 22:29:26 spdkcli_iscsi -- spdkcli/common.sh@16 -- # '[' -n 135704 ']' 00:27:54.555 22:29:26 spdkcli_iscsi -- spdkcli/common.sh@17 -- # killprocess 135704 00:27:54.555 22:29:26 spdkcli_iscsi -- common/autotest_common.sh@948 -- # '[' -z 135704 ']' 00:27:54.555 22:29:26 spdkcli_iscsi -- common/autotest_common.sh@952 -- # kill -0 135704 00:27:54.555 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (135704) - No such process 00:27:54.555 Process with pid 135704 is not found 00:27:54.555 22:29:26 spdkcli_iscsi -- common/autotest_common.sh@975 -- # echo 'Process with pid 135704 is not found' 00:27:54.555 22:29:26 spdkcli_iscsi -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:54.555 22:29:26 spdkcli_iscsi -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_iscsi.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:54.555 00:27:54.555 real 0m16.229s 00:27:54.555 user 0m34.556s 00:27:54.555 sys 0m1.241s 00:27:54.555 22:29:26 spdkcli_iscsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:54.555 22:29:26 
spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:27:54.555 ************************************ 00:27:54.555 END TEST spdkcli_iscsi 00:27:54.555 ************************************ 00:27:54.555 22:29:26 -- spdk/autotest.sh@267 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:27:54.555 22:29:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:54.555 22:29:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:54.555 22:29:26 -- common/autotest_common.sh@10 -- # set +x 00:27:54.555 ************************************ 00:27:54.555 START TEST spdkcli_raid 00:27:54.555 ************************************ 00:27:54.555 22:29:26 spdkcli_raid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:27:54.555 * Looking for test storage... 00:27:54.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:27:54.555 22:29:26 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:27:54.555 22:29:26 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:27:54.555 22:29:26 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:27:54.555 22:29:26 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:27:54.555 22:29:26 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:27:54.555 22:29:26 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:27:54.555 22:29:26 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:27:54.555 22:29:26 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:27:54.555 22:29:26 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:27:54.555 22:29:26 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # 
TARGET_INTERFACE=spdk_tgt_int 00:27:54.555 22:29:26 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:27:54.555 22:29:26 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:27:54.555 22:29:26 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:27:54.555 22:29:26 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:27:54.555 22:29:26 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:27:54.555 22:29:26 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:27:54.555 22:29:26 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:27:54.555 22:29:26 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:27:54.555 22:29:26 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:27:54.555 22:29:26 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:27:54.555 22:29:26 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:27:54.555 22:29:26 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:27:54.555 22:29:26 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:27:54.555 22:29:26 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:27:54.555 22:29:26 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:27:54.555 22:29:26 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:27:54.555 22:29:26 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:27:54.555 22:29:26 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:27:54.555 22:29:26 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:27:54.555 22:29:26 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:27:54.555 22:29:26 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:27:54.555 22:29:26 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:27:54.555 22:29:26 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:27:54.555 22:29:26 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:54.555 22:29:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:27:54.555 22:29:26 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:27:54.555 22:29:26 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=136001 00:27:54.555 22:29:26 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 136001 00:27:54.555 22:29:26 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:27:54.555 22:29:26 spdkcli_raid -- common/autotest_common.sh@829 -- # '[' -z 136001 ']' 00:27:54.555 22:29:26 spdkcli_raid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:54.555 22:29:26 spdkcli_raid -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:54.555 22:29:26 spdkcli_raid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:54.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:54.555 22:29:26 spdkcli_raid -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:54.555 22:29:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:27:54.555 [2024-07-23 22:29:26.368368] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc2 initialization... 
00:27:54.555 [2024-07-23 22:29:26.368459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136001 ] 00:27:54.555 [2024-07-23 22:29:26.495020] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:54.555 [2024-07-23 22:29:26.510585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:54.555 [2024-07-23 22:29:26.553706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:54.555 [2024-07-23 22:29:26.553735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.555 [2024-07-23 22:29:26.594974] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:27:55.122 22:29:27 spdkcli_raid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:55.122 22:29:27 spdkcli_raid -- common/autotest_common.sh@862 -- # return 0 00:27:55.122 22:29:27 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:27:55.122 22:29:27 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:55.122 22:29:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:27:55.122 22:29:27 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:27:55.122 22:29:27 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:55.122 22:29:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:27:55.122 22:29:27 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:55.122 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:55.122 ' 00:27:57.026 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 
00:27:57.026 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:27:57.026 22:29:28 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:27:57.026 22:29:28 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:57.026 22:29:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:27:57.026 22:29:28 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:27:57.026 22:29:28 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:57.026 22:29:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:27:57.026 22:29:28 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:27:57.026 ' 00:27:57.964 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:27:57.964 22:29:30 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:27:57.964 22:29:30 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:57.964 22:29:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:27:57.964 22:29:30 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:27:57.964 22:29:30 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:57.964 22:29:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:27:57.964 22:29:30 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:27:57.964 22:29:30 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:27:58.532 22:29:30 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:27:58.532 22:29:30 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 
00:27:58.532 22:29:30 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:27:58.532 22:29:30 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:58.532 22:29:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:27:58.532 22:29:30 spdkcli_raid -- spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:27:58.532 22:29:30 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:58.532 22:29:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:27:58.532 22:29:30 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:27:58.532 ' 00:27:59.468 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:27:59.727 22:29:31 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:27:59.727 22:29:31 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:59.727 22:29:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:27:59.727 22:29:31 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:27:59.727 22:29:31 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:59.727 22:29:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:27:59.727 22:29:31 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:27:59.727 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:27:59.727 ' 00:28:01.104 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:28:01.104 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:28:01.104 22:29:33 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:28:01.104 22:29:33 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:01.104 22:29:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:01.104 
22:29:33 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 136001 00:28:01.104 22:29:33 spdkcli_raid -- common/autotest_common.sh@948 -- # '[' -z 136001 ']' 00:28:01.104 22:29:33 spdkcli_raid -- common/autotest_common.sh@952 -- # kill -0 136001 00:28:01.104 22:29:33 spdkcli_raid -- common/autotest_common.sh@953 -- # uname 00:28:01.104 22:29:33 spdkcli_raid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:01.104 22:29:33 spdkcli_raid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 136001 00:28:01.104 killing process with pid 136001 00:28:01.105 22:29:33 spdkcli_raid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:01.105 22:29:33 spdkcli_raid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:01.105 22:29:33 spdkcli_raid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 136001' 00:28:01.105 22:29:33 spdkcli_raid -- common/autotest_common.sh@967 -- # kill 136001 00:28:01.105 22:29:33 spdkcli_raid -- common/autotest_common.sh@972 -- # wait 136001 00:28:01.674 22:29:33 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:28:01.674 Process with pid 136001 is not found 00:28:01.674 22:29:33 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 136001 ']' 00:28:01.674 22:29:33 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 136001 00:28:01.674 22:29:33 spdkcli_raid -- common/autotest_common.sh@948 -- # '[' -z 136001 ']' 00:28:01.674 22:29:33 spdkcli_raid -- common/autotest_common.sh@952 -- # kill -0 136001 00:28:01.674 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (136001) - No such process 00:28:01.674 22:29:33 spdkcli_raid -- common/autotest_common.sh@975 -- # echo 'Process with pid 136001 is not found' 00:28:01.674 22:29:33 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:28:01.674 22:29:33 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:28:01.674 22:29:33 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:28:01.674 22:29:33 
spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:28:01.674 00:28:01.674 real 0m7.437s 00:28:01.674 user 0m15.954s 00:28:01.674 sys 0m0.945s 00:28:01.674 22:29:33 spdkcli_raid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:01.674 ************************************ 00:28:01.674 END TEST spdkcli_raid 00:28:01.674 ************************************ 00:28:01.674 22:29:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:01.674 22:29:33 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:28:01.674 22:29:33 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:28:01.674 22:29:33 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:28:01.674 22:29:33 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:28:01.674 22:29:33 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:28:01.674 22:29:33 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:28:01.674 22:29:33 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:28:01.674 22:29:33 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:28:01.674 22:29:33 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:28:01.674 22:29:33 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:28:01.674 22:29:33 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:28:01.674 22:29:33 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:28:01.674 22:29:33 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:28:01.674 22:29:33 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:28:01.674 22:29:33 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:28:01.674 22:29:33 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:28:01.674 22:29:33 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:28:01.674 22:29:33 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:28:01.674 22:29:33 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:28:01.674 22:29:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:01.674 22:29:33 -- 
common/autotest_common.sh@10 -- # set +x 00:28:01.674 22:29:33 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:28:01.674 22:29:33 -- common/autotest_common.sh@1390 -- # local autotest_es=0 00:28:01.674 22:29:33 -- common/autotest_common.sh@1391 -- # xtrace_disable 00:28:01.674 22:29:33 -- common/autotest_common.sh@10 -- # set +x 00:28:04.211 INFO: APP EXITING 00:28:04.211 INFO: killing all VMs 00:28:04.211 INFO: killing vhost app 00:28:04.211 INFO: EXIT DONE 00:28:04.211 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:04.211 Waiting for block devices as requested 00:28:04.211 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:04.470 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:05.417 0000:00:10.0 (1b36 0010): Active devices: data@nvme1n1, so not binding PCI dev 00:28:05.417 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:05.417 Cleaning 00:28:05.417 Removing: /var/run/dpdk/spdk0/config 00:28:05.417 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:05.417 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:05.417 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:05.417 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:05.417 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:05.417 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:05.417 Removing: /var/run/dpdk/spdk1/config 00:28:05.417 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:28:05.417 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:28:05.417 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:28:05.417 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:28:05.417 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:28:05.417 Removing: /var/run/dpdk/spdk1/hugepage_info 00:28:05.417 Removing: /dev/shm/iscsi_trace.pid92442 00:28:05.417 Removing: /dev/shm/spdk_tgt_trace.pid72262 
00:28:05.417 Removing: /var/run/dpdk/spdk0 00:28:05.417 Removing: /var/run/dpdk/spdk1 00:28:05.417 Removing: /var/run/dpdk/spdk_pid135704 00:28:05.417 Removing: /var/run/dpdk/spdk_pid136001 00:28:05.417 Removing: /var/run/dpdk/spdk_pid72117 00:28:05.417 Removing: /var/run/dpdk/spdk_pid72262 00:28:05.417 Removing: /var/run/dpdk/spdk_pid72460 00:28:05.417 Removing: /var/run/dpdk/spdk_pid72541 00:28:05.417 Removing: /var/run/dpdk/spdk_pid72574 00:28:05.417 Removing: /var/run/dpdk/spdk_pid72689 00:28:05.417 Removing: /var/run/dpdk/spdk_pid72707 00:28:05.417 Removing: /var/run/dpdk/spdk_pid72825 00:28:05.417 Removing: /var/run/dpdk/spdk_pid73001 00:28:05.417 Removing: /var/run/dpdk/spdk_pid73147 00:28:05.417 Removing: /var/run/dpdk/spdk_pid73218 00:28:05.417 Removing: /var/run/dpdk/spdk_pid73294 00:28:05.417 Removing: /var/run/dpdk/spdk_pid73385 00:28:05.418 Removing: /var/run/dpdk/spdk_pid73462 00:28:05.418 Removing: /var/run/dpdk/spdk_pid73495 00:28:05.418 Removing: /var/run/dpdk/spdk_pid73525 00:28:05.418 Removing: /var/run/dpdk/spdk_pid73591 00:28:05.418 Removing: /var/run/dpdk/spdk_pid73686 00:28:05.418 Removing: /var/run/dpdk/spdk_pid74109 00:28:05.418 Removing: /var/run/dpdk/spdk_pid74157 00:28:05.418 Removing: /var/run/dpdk/spdk_pid74203 00:28:05.418 Removing: /var/run/dpdk/spdk_pid74219 00:28:05.418 Removing: /var/run/dpdk/spdk_pid74286 00:28:05.418 Removing: /var/run/dpdk/spdk_pid74302 00:28:05.418 Removing: /var/run/dpdk/spdk_pid74371 00:28:05.418 Removing: /var/run/dpdk/spdk_pid74387 00:28:05.418 Removing: /var/run/dpdk/spdk_pid74433 00:28:05.418 Removing: /var/run/dpdk/spdk_pid74445 00:28:05.418 Removing: /var/run/dpdk/spdk_pid74492 00:28:05.418 Removing: /var/run/dpdk/spdk_pid74505 00:28:05.418 Removing: /var/run/dpdk/spdk_pid74631 00:28:05.418 Removing: /var/run/dpdk/spdk_pid74661 00:28:05.689 Removing: /var/run/dpdk/spdk_pid74736 00:28:05.689 Removing: /var/run/dpdk/spdk_pid74787 00:28:05.689 Removing: /var/run/dpdk/spdk_pid74812 00:28:05.689 Removing: 
/var/run/dpdk/spdk_pid74870 00:28:05.689 Removing: /var/run/dpdk/spdk_pid74905 00:28:05.689 Removing: /var/run/dpdk/spdk_pid74939 00:28:05.689 Removing: /var/run/dpdk/spdk_pid74968 00:28:05.689 Removing: /var/run/dpdk/spdk_pid75007 00:28:05.689 Removing: /var/run/dpdk/spdk_pid75037 00:28:05.689 Removing: /var/run/dpdk/spdk_pid75072 00:28:05.689 Removing: /var/run/dpdk/spdk_pid75106 00:28:05.689 Removing: /var/run/dpdk/spdk_pid75141 00:28:05.689 Removing: /var/run/dpdk/spdk_pid75170 00:28:05.689 Removing: /var/run/dpdk/spdk_pid75204 00:28:05.689 Removing: /var/run/dpdk/spdk_pid75239 00:28:05.689 Removing: /var/run/dpdk/spdk_pid75273 00:28:05.689 Removing: /var/run/dpdk/spdk_pid75308 00:28:05.689 Removing: /var/run/dpdk/spdk_pid75337 00:28:05.689 Removing: /var/run/dpdk/spdk_pid75377 00:28:05.689 Removing: /var/run/dpdk/spdk_pid75406 00:28:05.689 Removing: /var/run/dpdk/spdk_pid75439 00:28:05.689 Removing: /var/run/dpdk/spdk_pid75481 00:28:05.689 Removing: /var/run/dpdk/spdk_pid75510 00:28:05.689 Removing: /var/run/dpdk/spdk_pid75551 00:28:05.689 Removing: /var/run/dpdk/spdk_pid75610 00:28:05.689 Removing: /var/run/dpdk/spdk_pid75691 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76002 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76020 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76051 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76064 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76085 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76104 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76118 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76133 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76152 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76166 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76181 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76200 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76214 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76229 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76248 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76262 00:28:05.689 Removing: 
/var/run/dpdk/spdk_pid76277 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76296 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76310 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76330 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76360 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76375 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76404 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76463 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76497 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76501 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76535 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76539 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76552 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76589 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76608 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76631 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76646 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76650 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76659 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76669 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76673 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76688 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76692 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76726 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76747 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76762 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76785 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76800 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76802 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76848 00:28:05.689 Removing: /var/run/dpdk/spdk_pid76854 00:28:05.949 Removing: /var/run/dpdk/spdk_pid76886 00:28:05.949 Removing: /var/run/dpdk/spdk_pid76888 00:28:05.949 Removing: /var/run/dpdk/spdk_pid76901 00:28:05.949 Removing: /var/run/dpdk/spdk_pid76903 00:28:05.949 Removing: /var/run/dpdk/spdk_pid76916 00:28:05.949 Removing: /var/run/dpdk/spdk_pid76918 00:28:05.949 Removing: /var/run/dpdk/spdk_pid76931 00:28:05.949 Removing: /var/run/dpdk/spdk_pid76933 00:28:05.949 Removing: 
/var/run/dpdk/spdk_pid77007 00:28:05.949 Removing: /var/run/dpdk/spdk_pid77044 00:28:05.949 Removing: /var/run/dpdk/spdk_pid77143 00:28:05.949 Removing: /var/run/dpdk/spdk_pid77181 00:28:05.949 Removing: /var/run/dpdk/spdk_pid77222 00:28:05.949 Removing: /var/run/dpdk/spdk_pid77237 00:28:05.949 Removing: /var/run/dpdk/spdk_pid77259 00:28:05.949 Removing: /var/run/dpdk/spdk_pid77273 00:28:05.949 Removing: /var/run/dpdk/spdk_pid77305 00:28:05.949 Removing: /var/run/dpdk/spdk_pid77326 00:28:05.949 Removing: /var/run/dpdk/spdk_pid77390 00:28:05.949 Removing: /var/run/dpdk/spdk_pid77412 00:28:05.949 Removing: /var/run/dpdk/spdk_pid77445 00:28:05.949 Removing: /var/run/dpdk/spdk_pid77509 00:28:05.949 Removing: /var/run/dpdk/spdk_pid77554 00:28:05.949 Removing: /var/run/dpdk/spdk_pid77583 00:28:05.949 Removing: /var/run/dpdk/spdk_pid77677 00:28:05.949 Removing: /var/run/dpdk/spdk_pid77725 00:28:05.949 Removing: /var/run/dpdk/spdk_pid77752 00:28:05.949 Removing: /var/run/dpdk/spdk_pid77976 00:28:05.949 Removing: /var/run/dpdk/spdk_pid78068 00:28:05.949 Removing: /var/run/dpdk/spdk_pid78097 00:28:05.949 Removing: /var/run/dpdk/spdk_pid78363 00:28:05.949 Removing: /var/run/dpdk/spdk_pid78388 00:28:05.949 Removing: /var/run/dpdk/spdk_pid78407 00:28:05.949 Removing: /var/run/dpdk/spdk_pid78450 00:28:05.949 Removing: /var/run/dpdk/spdk_pid78461 00:28:05.949 Removing: /var/run/dpdk/spdk_pid78478 00:28:05.949 Removing: /var/run/dpdk/spdk_pid78498 00:28:05.949 Removing: /var/run/dpdk/spdk_pid78509 00:28:05.949 Removing: /var/run/dpdk/spdk_pid78554 00:28:05.949 Removing: /var/run/dpdk/spdk_pid78568 00:28:05.949 Removing: /var/run/dpdk/spdk_pid78613 00:28:05.949 Removing: /var/run/dpdk/spdk_pid78705 00:28:05.949 Removing: /var/run/dpdk/spdk_pid79455 00:28:05.949 Removing: /var/run/dpdk/spdk_pid81091 00:28:05.949 Removing: /var/run/dpdk/spdk_pid81365 00:28:05.949 Removing: /var/run/dpdk/spdk_pid81660 00:28:05.949 Removing: /var/run/dpdk/spdk_pid81895 00:28:05.949 Removing: 
/var/run/dpdk/spdk_pid82433 00:28:05.949 Removing: /var/run/dpdk/spdk_pid87245 00:28:05.949 Removing: /var/run/dpdk/spdk_pid91368 00:28:05.949 Removing: /var/run/dpdk/spdk_pid92107 00:28:05.949 Removing: /var/run/dpdk/spdk_pid92140 00:28:05.949 Removing: /var/run/dpdk/spdk_pid92442 00:28:05.949 Removing: /var/run/dpdk/spdk_pid93699 00:28:05.949 Removing: /var/run/dpdk/spdk_pid94069 00:28:05.949 Removing: /var/run/dpdk/spdk_pid94115 00:28:05.949 Removing: /var/run/dpdk/spdk_pid94497 00:28:05.949 Removing: /var/run/dpdk/spdk_pid96865 00:28:05.949 Clean 00:28:06.209 22:29:38 -- common/autotest_common.sh@1449 -- # return 0 00:28:06.209 22:29:38 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:28:06.209 22:29:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:06.209 22:29:38 -- common/autotest_common.sh@10 -- # set +x 00:28:06.209 22:29:38 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:28:06.209 22:29:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:06.209 22:29:38 -- common/autotest_common.sh@10 -- # set +x 00:28:06.209 22:29:38 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:06.209 22:29:38 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:28:06.209 22:29:38 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:28:06.209 22:29:38 -- spdk/autotest.sh@391 -- # hash lcov 00:28:06.209 22:29:38 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:06.209 22:29:38 -- spdk/autotest.sh@393 -- # hostname 00:28:06.209 22:29:38 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:28:06.468 geninfo: 
WARNING: invalid characters removed from testname! 00:28:28.408 22:30:00 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:31.700 22:30:03 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:33.604 22:30:05 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:36.135 22:30:07 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:38.040 22:30:09 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:39.945 22:30:12 -- spdk/autotest.sh@399 -- 
# lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:42.480 22:30:14 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:42.480 22:30:14 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:42.480 22:30:14 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:42.480 22:30:14 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:42.480 22:30:14 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:42.481 22:30:14 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.481 22:30:14 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.481 22:30:14 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.481 22:30:14 -- paths/export.sh@5 -- $ export PATH 00:28:42.481 22:30:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.481 22:30:14 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:28:42.481 22:30:14 -- common/autobuild_common.sh@447 -- $ date +%s 00:28:42.481 22:30:14 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721773814.XXXXXX 00:28:42.481 22:30:14 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721773814.rRVaat 00:28:42.481 22:30:14 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:28:42.481 22:30:14 -- common/autobuild_common.sh@453 -- $ '[' -n main ']' 00:28:42.481 22:30:14 -- common/autobuild_common.sh@454 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:28:42.481 22:30:14 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:28:42.481 22:30:14 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:28:42.481 22:30:14 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme 
--exclude /tmp --status-bugs' 00:28:42.481 22:30:14 -- common/autobuild_common.sh@463 -- $ get_config_params 00:28:42.481 22:30:14 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:28:42.481 22:30:14 -- common/autotest_common.sh@10 -- $ set +x 00:28:42.481 22:30:14 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:28:42.481 22:30:14 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:28:42.481 22:30:14 -- pm/common@17 -- $ local monitor 00:28:42.481 22:30:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:42.481 22:30:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:42.481 22:30:14 -- pm/common@25 -- $ sleep 1 00:28:42.481 22:30:14 -- pm/common@21 -- $ date +%s 00:28:42.481 22:30:14 -- pm/common@21 -- $ date +%s 00:28:42.481 22:30:14 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721773814 00:28:42.481 22:30:14 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721773814 00:28:42.481 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721773814_collect-vmstat.pm.log 00:28:42.481 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721773814_collect-cpu-load.pm.log 00:28:43.420 22:30:15 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:28:43.420 22:30:15 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:28:43.420 22:30:15 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:28:43.420 22:30:15 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:43.420 22:30:15 
-- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:28:43.420 22:30:15 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:43.420 22:30:15 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:43.420 22:30:15 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:43.420 22:30:15 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:43.420 22:30:15 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:43.420 22:30:15 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:43.420 22:30:15 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:28:43.420 22:30:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:28:43.420 22:30:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:28:43.420 22:30:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:43.420 22:30:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:28:43.420 22:30:15 -- pm/common@44 -- $ pid=137769 00:28:43.420 22:30:15 -- pm/common@50 -- $ kill -TERM 137769 00:28:43.420 22:30:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:43.420 22:30:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:28:43.420 22:30:15 -- pm/common@44 -- $ pid=137771 00:28:43.420 22:30:15 -- pm/common@50 -- $ kill -TERM 137771 00:28:43.420 + [[ -n 5909 ]] 00:28:43.420 + sudo kill 5909 00:28:43.430 [Pipeline] } 00:28:43.451 [Pipeline] // timeout 00:28:43.458 [Pipeline] } 00:28:43.476 [Pipeline] // stage 00:28:43.483 [Pipeline] } 00:28:43.502 [Pipeline] // catchError 00:28:43.513 [Pipeline] stage 00:28:43.515 [Pipeline] { (Stop VM) 00:28:43.530 [Pipeline] sh 00:28:43.815 + vagrant halt 00:28:48.041 ==> default: Halting domain... 
00:28:54.619 [Pipeline] sh 00:28:54.894 + vagrant destroy -f 00:28:57.426 ==> default: Removing domain... 00:28:57.697 [Pipeline] sh 00:28:57.977 + mv output /var/jenkins/workspace/iscsi-uring-vg-autotest/output 00:28:57.986 [Pipeline] } 00:28:58.003 [Pipeline] // stage 00:28:58.009 [Pipeline] } 00:28:58.023 [Pipeline] // dir 00:28:58.028 [Pipeline] } 00:28:58.040 [Pipeline] // wrap 00:28:58.045 [Pipeline] } 00:28:58.056 [Pipeline] // catchError 00:28:58.067 [Pipeline] stage 00:28:58.069 [Pipeline] { (Epilogue) 00:28:58.084 [Pipeline] sh 00:28:58.368 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:03.651 [Pipeline] catchError 00:29:03.653 [Pipeline] { 00:29:03.667 [Pipeline] sh 00:29:03.948 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:04.207 Artifacts sizes are good 00:29:04.215 [Pipeline] } 00:29:04.231 [Pipeline] // catchError 00:29:04.243 [Pipeline] archiveArtifacts 00:29:04.249 Archiving artifacts 00:29:05.307 [Pipeline] cleanWs 00:29:05.320 [WS-CLEANUP] Deleting project workspace... 00:29:05.320 [WS-CLEANUP] Deferred wipeout is used... 00:29:05.335 [WS-CLEANUP] done 00:29:05.337 [Pipeline] } 00:29:05.356 [Pipeline] // stage 00:29:05.362 [Pipeline] } 00:29:05.379 [Pipeline] // node 00:29:05.385 [Pipeline] End of Pipeline 00:29:05.433 Finished: SUCCESS