00:00:00.001 Started by upstream project "autotest-nightly" build number 3917
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3292
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.039 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/iscsi-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.040 The recommended git tool is: git
00:00:00.040 using credential 00000000-0000-0000-0000-000000000002
00:00:00.043 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/iscsi-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.069 Fetching changes from the remote Git repository
00:00:00.071 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.104 Using shallow fetch with depth 1
00:00:00.104 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.104 > git --version # timeout=10
00:00:00.134 > git --version # 'git version 2.39.2'
00:00:00.134 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.172 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.172 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.220 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.231 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.242 Checking out Revision 456d80899d5187c68de113852b37bde1201fd33a (FETCH_HEAD)
00:00:04.242 > git config core.sparsecheckout # timeout=10
00:00:04.252 > git read-tree -mu HEAD # timeout=10
00:00:04.267 > git checkout -f 456d80899d5187c68de113852b37bde1201fd33a # timeout=5
00:00:04.291 Commit message: "jenkins/config: Drop WFP25 for maintenance"
00:00:04.291 > git rev-list --no-walk 456d80899d5187c68de113852b37bde1201fd33a # timeout=10
00:00:04.389 [Pipeline] Start of Pipeline
00:00:04.400 [Pipeline] library
00:00:04.401 Loading library shm_lib@master
00:00:04.401 Library shm_lib@master is cached. Copying from home.
00:00:04.413 [Pipeline] node
00:00:04.429 Running on VM-host-SM4 in /var/jenkins/workspace/iscsi-uring-vg-autotest
00:00:04.430 [Pipeline] {
00:00:04.437 [Pipeline] catchError
00:00:04.438 [Pipeline] {
00:00:04.446 [Pipeline] wrap
00:00:04.453 [Pipeline] {
00:00:04.458 [Pipeline] stage
00:00:04.460 [Pipeline] { (Prologue)
00:00:04.472 [Pipeline] echo
00:00:04.473 Node: VM-host-SM4
00:00:04.477 [Pipeline] cleanWs
00:00:04.488 [WS-CLEANUP] Deleting project workspace...
00:00:04.488 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.494 [WS-CLEANUP] done
00:00:04.651 [Pipeline] setCustomBuildProperty
00:00:04.731 [Pipeline] httpRequest
00:00:04.764 [Pipeline] echo
00:00:04.765 Sorcerer 10.211.164.101 is alive
00:00:04.772 [Pipeline] httpRequest
00:00:04.776 HttpMethod: GET
00:00:04.776 URL: http://10.211.164.101/packages/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz
00:00:04.777 Sending request to url: http://10.211.164.101/packages/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz
00:00:04.778 Response Code: HTTP/1.1 200 OK
00:00:04.779 Success: Status code 200 is in the accepted range: 200,404
00:00:04.779 Saving response body to /var/jenkins/workspace/iscsi-uring-vg-autotest/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz
00:00:05.540 [Pipeline] sh
00:00:05.826 + tar --no-same-owner -xf jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz
00:00:05.840 [Pipeline] httpRequest
00:00:05.868 [Pipeline] echo
00:00:05.870 Sorcerer 10.211.164.101 is alive
00:00:05.877 [Pipeline] httpRequest
00:00:05.880 HttpMethod: GET
00:00:05.881 URL: http://10.211.164.101/packages/spdk_78cbcfdde1ea721461a0377ef7e908b0636460ea.tar.gz
00:00:05.881
Sending request to url: http://10.211.164.101/packages/spdk_78cbcfdde1ea721461a0377ef7e908b0636460ea.tar.gz 00:00:05.891 Response Code: HTTP/1.1 200 OK 00:00:05.891 Success: Status code 200 is in the accepted range: 200,404 00:00:05.892 Saving response body to /var/jenkins/workspace/iscsi-uring-vg-autotest/spdk_78cbcfdde1ea721461a0377ef7e908b0636460ea.tar.gz 00:00:59.750 [Pipeline] sh 00:01:00.038 + tar --no-same-owner -xf spdk_78cbcfdde1ea721461a0377ef7e908b0636460ea.tar.gz 00:01:02.584 [Pipeline] sh 00:01:02.867 + git -C spdk log --oneline -n5 00:01:02.867 78cbcfdde test/scheduler: fix cpu mask for rpc governor tests 00:01:02.867 ba69d4678 event/scheduler: remove custom opts from static scheduler 00:01:02.867 79fce488b test/scheduler: test scheduling period with dynamic scheduler 00:01:02.867 673f37314 ut/nvme_pcie: allocate nvme_pcie_qpair instead of spdk_nvme_qpair 00:01:02.867 084afa904 util: copy errno before calling stdlib's functions 00:01:02.885 [Pipeline] writeFile 00:01:02.901 [Pipeline] sh 00:01:03.185 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:03.196 [Pipeline] sh 00:01:03.478 + cat autorun-spdk.conf 00:01:03.478 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:03.478 SPDK_TEST_ISCSI=1 00:01:03.478 SPDK_TEST_URING=1 00:01:03.478 SPDK_RUN_ASAN=1 00:01:03.478 SPDK_RUN_UBSAN=1 00:01:03.478 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:03.486 RUN_NIGHTLY=1 00:01:03.487 [Pipeline] } 00:01:03.503 [Pipeline] // stage 00:01:03.518 [Pipeline] stage 00:01:03.520 [Pipeline] { (Run VM) 00:01:03.535 [Pipeline] sh 00:01:03.817 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:03.817 + echo 'Start stage prepare_nvme.sh' 00:01:03.817 Start stage prepare_nvme.sh 00:01:03.817 + [[ -n 0 ]] 00:01:03.817 + disk_prefix=ex0 00:01:03.817 + [[ -n /var/jenkins/workspace/iscsi-uring-vg-autotest ]] 00:01:03.817 + [[ -e /var/jenkins/workspace/iscsi-uring-vg-autotest/autorun-spdk.conf ]] 00:01:03.817 + source 
/var/jenkins/workspace/iscsi-uring-vg-autotest/autorun-spdk.conf 00:01:03.817 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:03.817 ++ SPDK_TEST_ISCSI=1 00:01:03.817 ++ SPDK_TEST_URING=1 00:01:03.817 ++ SPDK_RUN_ASAN=1 00:01:03.817 ++ SPDK_RUN_UBSAN=1 00:01:03.817 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:03.817 ++ RUN_NIGHTLY=1 00:01:03.817 + cd /var/jenkins/workspace/iscsi-uring-vg-autotest 00:01:03.817 + nvme_files=() 00:01:03.817 + declare -A nvme_files 00:01:03.817 + backend_dir=/var/lib/libvirt/images/backends 00:01:03.817 + nvme_files['nvme.img']=5G 00:01:03.817 + nvme_files['nvme-cmb.img']=5G 00:01:03.817 + nvme_files['nvme-multi0.img']=4G 00:01:03.817 + nvme_files['nvme-multi1.img']=4G 00:01:03.817 + nvme_files['nvme-multi2.img']=4G 00:01:03.817 + nvme_files['nvme-openstack.img']=8G 00:01:03.817 + nvme_files['nvme-zns.img']=5G 00:01:03.817 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:03.817 + (( SPDK_TEST_FTL == 1 )) 00:01:03.817 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:03.817 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:03.817 + for nvme in "${!nvme_files[@]}" 00:01:03.817 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:01:03.817 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:03.817 + for nvme in "${!nvme_files[@]}" 00:01:03.817 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:01:03.817 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:03.817 + for nvme in "${!nvme_files[@]}" 00:01:03.817 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:01:03.817 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:03.817 + for nvme in "${!nvme_files[@]}" 00:01:03.817 + sudo -E 
spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:01:04.077 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:04.077 + for nvme in "${!nvme_files[@]}" 00:01:04.077 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:01:04.077 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:04.077 + for nvme in "${!nvme_files[@]}" 00:01:04.077 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:01:04.077 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:04.077 + for nvme in "${!nvme_files[@]}" 00:01:04.077 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:01:04.077 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:04.077 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:01:04.077 + echo 'End stage prepare_nvme.sh' 00:01:04.077 End stage prepare_nvme.sh 00:01:04.089 [Pipeline] sh 00:01:04.371 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:04.371 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora38 00:01:04.631 00:01:04.631 DIR=/var/jenkins/workspace/iscsi-uring-vg-autotest/spdk/scripts/vagrant 00:01:04.631 SPDK_DIR=/var/jenkins/workspace/iscsi-uring-vg-autotest/spdk 00:01:04.631 
VAGRANT_TARGET=/var/jenkins/workspace/iscsi-uring-vg-autotest 00:01:04.631 HELP=0 00:01:04.631 DRY_RUN=0 00:01:04.631 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:01:04.631 NVME_DISKS_TYPE=nvme,nvme, 00:01:04.631 NVME_AUTO_CREATE=0 00:01:04.631 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:01:04.631 NVME_CMB=,, 00:01:04.631 NVME_PMR=,, 00:01:04.631 NVME_ZNS=,, 00:01:04.631 NVME_MS=,, 00:01:04.631 NVME_FDP=,, 00:01:04.631 SPDK_VAGRANT_DISTRO=fedora38 00:01:04.631 SPDK_VAGRANT_VMCPU=10 00:01:04.631 SPDK_VAGRANT_VMRAM=12288 00:01:04.631 SPDK_VAGRANT_PROVIDER=libvirt 00:01:04.631 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:04.631 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:04.631 SPDK_OPENSTACK_NETWORK=0 00:01:04.631 VAGRANT_PACKAGE_BOX=0 00:01:04.631 VAGRANTFILE=/var/jenkins/workspace/iscsi-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:04.631 FORCE_DISTRO=true 00:01:04.631 VAGRANT_BOX_VERSION= 00:01:04.631 EXTRA_VAGRANTFILES= 00:01:04.631 NIC_MODEL=e1000 00:01:04.631 00:01:04.631 mkdir: created directory '/var/jenkins/workspace/iscsi-uring-vg-autotest/fedora38-libvirt' 00:01:04.631 /var/jenkins/workspace/iscsi-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/iscsi-uring-vg-autotest 00:01:07.934 Bringing machine 'default' up with 'libvirt' provider... 00:01:08.503 ==> default: Creating image (snapshot of base box volume). 00:01:08.503 ==> default: Creating domain with the following settings... 
00:01:08.504 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721796622_234d9ad9a32a607b0412 00:01:08.504 ==> default: -- Domain type: kvm 00:01:08.504 ==> default: -- Cpus: 10 00:01:08.504 ==> default: -- Feature: acpi 00:01:08.504 ==> default: -- Feature: apic 00:01:08.504 ==> default: -- Feature: pae 00:01:08.504 ==> default: -- Memory: 12288M 00:01:08.504 ==> default: -- Memory Backing: hugepages: 00:01:08.504 ==> default: -- Management MAC: 00:01:08.504 ==> default: -- Loader: 00:01:08.504 ==> default: -- Nvram: 00:01:08.504 ==> default: -- Base box: spdk/fedora38 00:01:08.504 ==> default: -- Storage pool: default 00:01:08.504 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721796622_234d9ad9a32a607b0412.img (20G) 00:01:08.504 ==> default: -- Volume Cache: default 00:01:08.504 ==> default: -- Kernel: 00:01:08.504 ==> default: -- Initrd: 00:01:08.504 ==> default: -- Graphics Type: vnc 00:01:08.504 ==> default: -- Graphics Port: -1 00:01:08.504 ==> default: -- Graphics IP: 127.0.0.1 00:01:08.504 ==> default: -- Graphics Password: Not defined 00:01:08.504 ==> default: -- Video Type: cirrus 00:01:08.504 ==> default: -- Video VRAM: 9216 00:01:08.504 ==> default: -- Sound Type: 00:01:08.504 ==> default: -- Keymap: en-us 00:01:08.504 ==> default: -- TPM Path: 00:01:08.504 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:08.504 ==> default: -- Command line args: 00:01:08.504 ==> default: -> value=-device, 00:01:08.504 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:08.504 ==> default: -> value=-drive, 00:01:08.504 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:01:08.504 ==> default: -> value=-device, 00:01:08.504 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:08.504 ==> default: -> value=-device, 00:01:08.504 
==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:08.504 ==> default: -> value=-drive, 00:01:08.504 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:08.504 ==> default: -> value=-device, 00:01:08.504 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:08.504 ==> default: -> value=-drive, 00:01:08.504 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:08.504 ==> default: -> value=-device, 00:01:08.504 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:08.504 ==> default: -> value=-drive, 00:01:08.504 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:08.504 ==> default: -> value=-device, 00:01:08.504 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:08.504 ==> default: Creating shared folders metadata... 00:01:08.504 ==> default: Starting domain. 00:01:11.044 ==> default: Waiting for domain to get an IP address... 00:01:29.143 ==> default: Waiting for SSH to become available... 00:01:29.143 ==> default: Configuring and enabling network interfaces... 00:01:33.335 default: SSH address: 192.168.121.15:22 00:01:33.335 default: SSH username: vagrant 00:01:33.335 default: SSH auth method: private key 00:01:35.869 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/iscsi-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:43.987 ==> default: Mounting SSHFS shared folder... 
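[Editor's note] The `-drive ... file=.../ex0-nvme*.img` arguments above point at the raw backing files that `create_nvme_img.sh` produced earlier (`fmt=raw ... preallocation=falloc`). That script's internals are not shown in this log, but a comparable sparse raw image can be sketched with coreutils alone; the path below is illustrative, not the job's real backend directory:

```shell
# Create a 4 GiB sparse raw file of the kind used to back an emulated NVMe drive.
# The /tmp path is an example; the CI job writes under /var/lib/libvirt/images/backends.
img=/tmp/ex0-nvme-demo.img
truncate -s 4G "$img"   # apparent size is 4 GiB, but almost no blocks are allocated
stat -c %s "$img"       # prints 4294967296
```

QEMU accepts such a file directly via `-drive format=raw,file=...`, as the command-line args above show.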
00:01:45.892 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/iscsi-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:45.892 ==> default: Checking Mount.. 00:01:47.271 ==> default: Folder Successfully Mounted! 00:01:47.271 ==> default: Running provisioner: file... 00:01:48.209 default: ~/.gitconfig => .gitconfig 00:01:48.468 00:01:48.468 SUCCESS! 00:01:48.468 00:01:48.468 cd to /var/jenkins/workspace/iscsi-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:48.468 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:48.468 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/iscsi-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:01:48.468 00:01:48.478 [Pipeline] } 00:01:48.495 [Pipeline] // stage 00:01:48.504 [Pipeline] dir 00:01:48.505 Running in /var/jenkins/workspace/iscsi-uring-vg-autotest/fedora38-libvirt 00:01:48.506 [Pipeline] { 00:01:48.521 [Pipeline] catchError 00:01:48.523 [Pipeline] { 00:01:48.537 [Pipeline] sh 00:01:48.818 + vagrant ssh-config --host vagrant 00:01:48.819 + sed -ne /^Host/,$p 00:01:48.819 + tee ssh_conf 00:01:52.102 Host vagrant 00:01:52.102 HostName 192.168.121.15 00:01:52.102 User vagrant 00:01:52.102 Port 22 00:01:52.102 UserKnownHostsFile /dev/null 00:01:52.102 StrictHostKeyChecking no 00:01:52.102 PasswordAuthentication no 00:01:52.102 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:52.102 IdentitiesOnly yes 00:01:52.102 LogLevel FATAL 00:01:52.102 ForwardAgent yes 00:01:52.102 ForwardX11 yes 00:01:52.102 00:01:52.118 [Pipeline] withEnv 00:01:52.120 [Pipeline] { 00:01:52.135 [Pipeline] sh 00:01:52.442 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:52.442 source /etc/os-release 00:01:52.442 [[ -e /image.version ]] && img=$(< /image.version) 00:01:52.442 # Minimal, systemd-like check. 
00:01:52.442 if [[ -e /.dockerenv ]]; then 00:01:52.442 # Clear garbage from the node's name: 00:01:52.442 # agt-er_autotest_547-896 -> autotest_547-896 00:01:52.442 # $HOSTNAME is the actual container id 00:01:52.442 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:52.442 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:52.442 # We can assume this is a mount from a host where container is running, 00:01:52.442 # so fetch its hostname to easily identify the target swarm worker. 00:01:52.442 container="$(< /etc/hostname) ($agent)" 00:01:52.442 else 00:01:52.442 # Fallback 00:01:52.442 container=$agent 00:01:52.442 fi 00:01:52.442 fi 00:01:52.442 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:52.442 00:01:52.712 [Pipeline] } 00:01:52.732 [Pipeline] // withEnv 00:01:52.739 [Pipeline] setCustomBuildProperty 00:01:52.754 [Pipeline] stage 00:01:52.757 [Pipeline] { (Tests) 00:01:52.774 [Pipeline] sh 00:01:53.055 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:53.327 [Pipeline] sh 00:01:53.607 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:53.880 [Pipeline] timeout 00:01:53.881 Timeout set to expire in 45 min 00:01:53.883 [Pipeline] { 00:01:53.898 [Pipeline] sh 00:01:54.179 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:54.748 HEAD is now at 78cbcfdde test/scheduler: fix cpu mask for rpc governor tests 00:01:54.760 [Pipeline] sh 00:01:55.043 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:55.366 [Pipeline] sh 00:01:55.649 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:55.925 [Pipeline] sh 00:01:56.204 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 
JOB_BASE_NAME=iscsi-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:56.464 ++ readlink -f spdk_repo 00:01:56.464 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:56.464 + [[ -n /home/vagrant/spdk_repo ]] 00:01:56.464 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:56.464 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:56.464 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:56.464 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:01:56.464 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:56.464 + [[ iscsi-uring-vg-autotest == pkgdep-* ]] 00:01:56.464 + cd /home/vagrant/spdk_repo 00:01:56.464 + source /etc/os-release 00:01:56.464 ++ NAME='Fedora Linux' 00:01:56.464 ++ VERSION='38 (Cloud Edition)' 00:01:56.464 ++ ID=fedora 00:01:56.464 ++ VERSION_ID=38 00:01:56.464 ++ VERSION_CODENAME= 00:01:56.464 ++ PLATFORM_ID=platform:f38 00:01:56.464 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:56.464 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:56.464 ++ LOGO=fedora-logo-icon 00:01:56.464 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:56.464 ++ HOME_URL=https://fedoraproject.org/ 00:01:56.464 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:56.464 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:56.464 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:56.464 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:56.464 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:56.464 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:56.464 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:56.464 ++ SUPPORT_END=2024-05-14 00:01:56.464 ++ VARIANT='Cloud Edition' 00:01:56.464 ++ VARIANT_ID=cloud 00:01:56.464 + uname -a 00:01:56.464 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:56.464 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:57.034 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
00:01:57.034 Hugepages 00:01:57.034 node hugesize free / total 00:01:57.034 node0 1048576kB 0 / 0 00:01:57.034 node0 2048kB 0 / 0 00:01:57.034 00:01:57.034 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:57.034 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:57.034 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:57.034 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:57.034 + rm -f /tmp/spdk-ld-path 00:01:57.034 + source autorun-spdk.conf 00:01:57.034 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:57.034 ++ SPDK_TEST_ISCSI=1 00:01:57.034 ++ SPDK_TEST_URING=1 00:01:57.034 ++ SPDK_RUN_ASAN=1 00:01:57.034 ++ SPDK_RUN_UBSAN=1 00:01:57.034 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:57.034 ++ RUN_NIGHTLY=1 00:01:57.034 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:57.034 + [[ -n '' ]] 00:01:57.034 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:57.034 + for M in /var/spdk/build-*-manifest.txt 00:01:57.034 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:57.034 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:57.034 + for M in /var/spdk/build-*-manifest.txt 00:01:57.034 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:57.034 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:57.034 ++ uname 00:01:57.034 + [[ Linux == \L\i\n\u\x ]] 00:01:57.034 + sudo dmesg -T 00:01:57.034 + sudo dmesg --clear 00:01:57.294 + dmesg_pid=5157 00:01:57.294 + sudo dmesg -Tw 00:01:57.294 + [[ Fedora Linux == FreeBSD ]] 00:01:57.294 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:57.294 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:57.294 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:57.294 + [[ -x /usr/src/fio-static/fio ]] 00:01:57.294 + export FIO_BIN=/usr/src/fio-static/fio 00:01:57.294 + FIO_BIN=/usr/src/fio-static/fio 00:01:57.294 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:57.294 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:57.294 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:57.294 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:57.294 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:57.294 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:57.294 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:57.294 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:57.294 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:57.294 Test configuration: 00:01:57.294 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:57.294 SPDK_TEST_ISCSI=1 00:01:57.294 SPDK_TEST_URING=1 00:01:57.294 SPDK_RUN_ASAN=1 00:01:57.294 SPDK_RUN_UBSAN=1 00:01:57.294 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:57.294 RUN_NIGHTLY=1 04:51:11 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:57.294 04:51:11 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:57.294 04:51:11 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:57.294 04:51:11 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:57.294 04:51:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.294 04:51:11 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.294 04:51:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.294 04:51:11 -- paths/export.sh@5 -- $ export PATH 00:01:57.294 04:51:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.294 04:51:11 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:57.294 04:51:11 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:57.294 04:51:11 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721796671.XXXXXX 00:01:57.294 04:51:11 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721796671.DNH3Dq 00:01:57.294 04:51:11 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:57.294 04:51:11 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:57.294 04:51:11 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:57.294 04:51:11 -- common/autobuild_common.sh@460 
-- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:57.294 04:51:11 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:57.294 04:51:11 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:57.294 04:51:11 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:57.294 04:51:11 -- common/autotest_common.sh@10 -- $ set +x 00:01:57.294 04:51:11 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-uring' 00:01:57.294 04:51:11 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:57.294 04:51:11 -- pm/common@17 -- $ local monitor 00:01:57.294 04:51:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.294 04:51:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.294 04:51:11 -- pm/common@25 -- $ sleep 1 00:01:57.294 04:51:11 -- pm/common@21 -- $ date +%s 00:01:57.294 04:51:11 -- pm/common@21 -- $ date +%s 00:01:57.294 04:51:11 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721796671 00:01:57.294 04:51:11 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721796671 00:01:57.294 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721796671_collect-vmstat.pm.log 00:01:57.294 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721796671_collect-cpu-load.pm.log 00:01:58.232 04:51:12 -- common/autobuild_common.sh@466 -- 
$ trap stop_monitor_resources EXIT 00:01:58.232 04:51:12 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:58.232 04:51:12 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:58.232 04:51:12 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:58.232 04:51:12 -- spdk/autobuild.sh@16 -- $ date -u 00:01:58.232 Wed Jul 24 04:51:12 AM UTC 2024 00:01:58.232 04:51:12 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:58.232 v24.09-pre-309-g78cbcfdde 00:01:58.232 04:51:12 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:58.232 04:51:12 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:58.232 04:51:12 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:58.232 04:51:12 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:58.232 04:51:12 -- common/autotest_common.sh@10 -- $ set +x 00:01:58.492 ************************************ 00:01:58.492 START TEST asan 00:01:58.492 ************************************ 00:01:58.492 using asan 00:01:58.492 ************************************ 00:01:58.492 END TEST asan 00:01:58.492 ************************************ 00:01:58.492 04:51:12 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:01:58.492 00:01:58.492 real 0m0.000s 00:01:58.492 user 0m0.000s 00:01:58.492 sys 0m0.000s 00:01:58.492 04:51:12 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:58.492 04:51:12 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:58.492 04:51:12 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:58.492 04:51:12 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:58.492 04:51:12 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:58.492 04:51:12 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:58.492 04:51:12 -- common/autotest_common.sh@10 -- $ set +x 00:01:58.492 ************************************ 00:01:58.492 START TEST ubsan 00:01:58.492 ************************************ 00:01:58.492 using ubsan 00:01:58.492 04:51:12 ubsan -- 
common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:58.492 00:01:58.492 real 0m0.000s 00:01:58.492 user 0m0.000s 00:01:58.492 sys 0m0.000s 00:01:58.492 04:51:12 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:58.492 ************************************ 00:01:58.492 END TEST ubsan 00:01:58.492 ************************************ 00:01:58.492 04:51:12 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:58.492 04:51:12 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:58.492 04:51:12 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:58.492 04:51:12 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:58.492 04:51:12 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:58.492 04:51:12 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:58.492 04:51:12 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:58.492 04:51:12 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:58.492 04:51:12 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:58.492 04:51:12 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-uring --with-shared 00:01:58.492 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:58.492 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:59.061 Using 'verbs' RDMA provider 00:02:15.351 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:30.231 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:30.231 Creating mk/config.mk...done. 00:02:30.231 Creating mk/cc.flags.mk...done. 00:02:30.231 Type 'make' to build. 
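[Editor's note] The `run_test <name> <command>` wrapper that brackets the asan/ubsan checks above with `START TEST`/`END TEST` banners comes from SPDK's common test scripts; the real function also records timing and xtrace state. A hypothetical, stripped-down stand-in for just the banner pattern:

```shell
# Hypothetical minimal version of SPDK's run_test banner pattern.
# The real implementation (in SPDK's common test scripts) does more:
# timing, xtrace handling, and failure bookkeeping.
run_test() {
  local name=$1; shift
  echo "************ START TEST $name ************"
  "$@"                       # run the wrapped command with its own args
  local rc=$?
  echo "************ END TEST $name ************"
  return $rc
}

run_test asan echo 'using asan'
```

This mirrors the log's structure, where e.g. `run_test asan echo 'using asan'` emits the banners around the command's own output.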
00:02:30.231 04:51:43 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:30.231 04:51:43 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:30.231 04:51:43 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:30.231 04:51:43 -- common/autotest_common.sh@10 -- $ set +x 00:02:30.231 ************************************ 00:02:30.231 START TEST make 00:02:30.231 ************************************ 00:02:30.231 04:51:43 make -- common/autotest_common.sh@1123 -- $ make -j10 00:02:30.231 make[1]: Nothing to be done for 'all'. 00:02:38.345 The Meson build system 00:02:38.346 Version: 1.3.1 00:02:38.346 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:38.346 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:38.346 Build type: native build 00:02:38.346 Program cat found: YES (/usr/bin/cat) 00:02:38.346 Project name: DPDK 00:02:38.346 Project version: 24.03.0 00:02:38.346 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:38.346 C linker for the host machine: cc ld.bfd 2.39-16 00:02:38.346 Host machine cpu family: x86_64 00:02:38.346 Host machine cpu: x86_64 00:02:38.346 Message: ## Building in Developer Mode ## 00:02:38.346 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:38.346 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:38.346 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:38.346 Program python3 found: YES (/usr/bin/python3) 00:02:38.346 Program cat found: YES (/usr/bin/cat) 00:02:38.346 Compiler for C supports arguments -march=native: YES 00:02:38.346 Checking for size of "void *" : 8 00:02:38.346 Checking for size of "void *" : 8 (cached) 00:02:38.346 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:38.346 Library m found: YES 00:02:38.346 Library numa found: YES 00:02:38.346 Has header "numaif.h" : YES 
00:02:38.346 Library fdt found: NO 00:02:38.346 Library execinfo found: NO 00:02:38.346 Has header "execinfo.h" : YES 00:02:38.346 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:38.346 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:38.346 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:38.346 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:38.346 Run-time dependency openssl found: YES 3.0.9 00:02:38.346 Run-time dependency libpcap found: YES 1.10.4 00:02:38.346 Has header "pcap.h" with dependency libpcap: YES 00:02:38.346 Compiler for C supports arguments -Wcast-qual: YES 00:02:38.346 Compiler for C supports arguments -Wdeprecated: YES 00:02:38.346 Compiler for C supports arguments -Wformat: YES 00:02:38.346 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:38.346 Compiler for C supports arguments -Wformat-security: NO 00:02:38.346 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:38.346 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:38.346 Compiler for C supports arguments -Wnested-externs: YES 00:02:38.346 Compiler for C supports arguments -Wold-style-definition: YES 00:02:38.346 Compiler for C supports arguments -Wpointer-arith: YES 00:02:38.346 Compiler for C supports arguments -Wsign-compare: YES 00:02:38.346 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:38.346 Compiler for C supports arguments -Wundef: YES 00:02:38.346 Compiler for C supports arguments -Wwrite-strings: YES 00:02:38.346 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:38.346 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:38.346 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:38.346 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:38.346 Program objdump found: YES (/usr/bin/objdump) 00:02:38.346 Compiler for C supports arguments -mavx512f: YES 00:02:38.346 Checking if "AVX512 
checking" compiles: YES 00:02:38.346 Fetching value of define "__SSE4_2__" : 1 00:02:38.346 Fetching value of define "__AES__" : 1 00:02:38.346 Fetching value of define "__AVX__" : 1 00:02:38.346 Fetching value of define "__AVX2__" : 1 00:02:38.346 Fetching value of define "__AVX512BW__" : 1 00:02:38.346 Fetching value of define "__AVX512CD__" : 1 00:02:38.346 Fetching value of define "__AVX512DQ__" : 1 00:02:38.346 Fetching value of define "__AVX512F__" : 1 00:02:38.346 Fetching value of define "__AVX512VL__" : 1 00:02:38.346 Fetching value of define "__PCLMUL__" : 1 00:02:38.346 Fetching value of define "__RDRND__" : 1 00:02:38.346 Fetching value of define "__RDSEED__" : 1 00:02:38.346 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:38.346 Fetching value of define "__znver1__" : (undefined) 00:02:38.346 Fetching value of define "__znver2__" : (undefined) 00:02:38.346 Fetching value of define "__znver3__" : (undefined) 00:02:38.346 Fetching value of define "__znver4__" : (undefined) 00:02:38.346 Library asan found: YES 00:02:38.346 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:38.346 Message: lib/log: Defining dependency "log" 00:02:38.346 Message: lib/kvargs: Defining dependency "kvargs" 00:02:38.346 Message: lib/telemetry: Defining dependency "telemetry" 00:02:38.346 Library rt found: YES 00:02:38.346 Checking for function "getentropy" : NO 00:02:38.346 Message: lib/eal: Defining dependency "eal" 00:02:38.346 Message: lib/ring: Defining dependency "ring" 00:02:38.346 Message: lib/rcu: Defining dependency "rcu" 00:02:38.346 Message: lib/mempool: Defining dependency "mempool" 00:02:38.346 Message: lib/mbuf: Defining dependency "mbuf" 00:02:38.346 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:38.346 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:38.346 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:38.346 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:38.346 Fetching value of define 
"__AVX512VL__" : 1 (cached) 00:02:38.346 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:38.346 Compiler for C supports arguments -mpclmul: YES 00:02:38.346 Compiler for C supports arguments -maes: YES 00:02:38.346 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:38.346 Compiler for C supports arguments -mavx512bw: YES 00:02:38.346 Compiler for C supports arguments -mavx512dq: YES 00:02:38.346 Compiler for C supports arguments -mavx512vl: YES 00:02:38.346 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:38.346 Compiler for C supports arguments -mavx2: YES 00:02:38.346 Compiler for C supports arguments -mavx: YES 00:02:38.346 Message: lib/net: Defining dependency "net" 00:02:38.346 Message: lib/meter: Defining dependency "meter" 00:02:38.346 Message: lib/ethdev: Defining dependency "ethdev" 00:02:38.346 Message: lib/pci: Defining dependency "pci" 00:02:38.346 Message: lib/cmdline: Defining dependency "cmdline" 00:02:38.346 Message: lib/hash: Defining dependency "hash" 00:02:38.346 Message: lib/timer: Defining dependency "timer" 00:02:38.346 Message: lib/compressdev: Defining dependency "compressdev" 00:02:38.346 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:38.346 Message: lib/dmadev: Defining dependency "dmadev" 00:02:38.346 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:38.346 Message: lib/power: Defining dependency "power" 00:02:38.346 Message: lib/reorder: Defining dependency "reorder" 00:02:38.346 Message: lib/security: Defining dependency "security" 00:02:38.346 Has header "linux/userfaultfd.h" : YES 00:02:38.346 Has header "linux/vduse.h" : YES 00:02:38.346 Message: lib/vhost: Defining dependency "vhost" 00:02:38.346 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:38.346 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:38.346 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:38.346 Message: drivers/mempool/ring: Defining 
dependency "mempool_ring" 00:02:38.346 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:38.346 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:38.346 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:38.346 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:38.346 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:38.346 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:38.346 Program doxygen found: YES (/usr/bin/doxygen) 00:02:38.346 Configuring doxy-api-html.conf using configuration 00:02:38.346 Configuring doxy-api-man.conf using configuration 00:02:38.346 Program mandb found: YES (/usr/bin/mandb) 00:02:38.346 Program sphinx-build found: NO 00:02:38.346 Configuring rte_build_config.h using configuration 00:02:38.346 Message: 00:02:38.346 ================= 00:02:38.346 Applications Enabled 00:02:38.346 ================= 00:02:38.346 00:02:38.346 apps: 00:02:38.346 00:02:38.346 00:02:38.346 Message: 00:02:38.346 ================= 00:02:38.346 Libraries Enabled 00:02:38.346 ================= 00:02:38.346 00:02:38.346 libs: 00:02:38.346 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:38.346 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:38.346 cryptodev, dmadev, power, reorder, security, vhost, 00:02:38.346 00:02:38.346 Message: 00:02:38.346 =============== 00:02:38.346 Drivers Enabled 00:02:38.346 =============== 00:02:38.346 00:02:38.346 common: 00:02:38.346 00:02:38.346 bus: 00:02:38.346 pci, vdev, 00:02:38.346 mempool: 00:02:38.346 ring, 00:02:38.346 dma: 00:02:38.346 00:02:38.346 net: 00:02:38.346 00:02:38.346 crypto: 00:02:38.346 00:02:38.346 compress: 00:02:38.346 00:02:38.346 vdpa: 00:02:38.346 00:02:38.346 00:02:38.346 Message: 00:02:38.346 ================= 00:02:38.346 Content Skipped 00:02:38.346 ================= 00:02:38.346 00:02:38.347 apps: 
00:02:38.347 dumpcap: explicitly disabled via build config 00:02:38.347 graph: explicitly disabled via build config 00:02:38.347 pdump: explicitly disabled via build config 00:02:38.347 proc-info: explicitly disabled via build config 00:02:38.347 test-acl: explicitly disabled via build config 00:02:38.347 test-bbdev: explicitly disabled via build config 00:02:38.347 test-cmdline: explicitly disabled via build config 00:02:38.347 test-compress-perf: explicitly disabled via build config 00:02:38.347 test-crypto-perf: explicitly disabled via build config 00:02:38.347 test-dma-perf: explicitly disabled via build config 00:02:38.347 test-eventdev: explicitly disabled via build config 00:02:38.347 test-fib: explicitly disabled via build config 00:02:38.347 test-flow-perf: explicitly disabled via build config 00:02:38.347 test-gpudev: explicitly disabled via build config 00:02:38.347 test-mldev: explicitly disabled via build config 00:02:38.347 test-pipeline: explicitly disabled via build config 00:02:38.347 test-pmd: explicitly disabled via build config 00:02:38.347 test-regex: explicitly disabled via build config 00:02:38.347 test-sad: explicitly disabled via build config 00:02:38.347 test-security-perf: explicitly disabled via build config 00:02:38.347 00:02:38.347 libs: 00:02:38.347 argparse: explicitly disabled via build config 00:02:38.347 metrics: explicitly disabled via build config 00:02:38.347 acl: explicitly disabled via build config 00:02:38.347 bbdev: explicitly disabled via build config 00:02:38.347 bitratestats: explicitly disabled via build config 00:02:38.347 bpf: explicitly disabled via build config 00:02:38.347 cfgfile: explicitly disabled via build config 00:02:38.347 distributor: explicitly disabled via build config 00:02:38.347 efd: explicitly disabled via build config 00:02:38.347 eventdev: explicitly disabled via build config 00:02:38.347 dispatcher: explicitly disabled via build config 00:02:38.347 gpudev: explicitly disabled via build config 
00:02:38.347 gro: explicitly disabled via build config 00:02:38.347 gso: explicitly disabled via build config 00:02:38.347 ip_frag: explicitly disabled via build config 00:02:38.347 jobstats: explicitly disabled via build config 00:02:38.347 latencystats: explicitly disabled via build config 00:02:38.347 lpm: explicitly disabled via build config 00:02:38.347 member: explicitly disabled via build config 00:02:38.347 pcapng: explicitly disabled via build config 00:02:38.347 rawdev: explicitly disabled via build config 00:02:38.347 regexdev: explicitly disabled via build config 00:02:38.347 mldev: explicitly disabled via build config 00:02:38.347 rib: explicitly disabled via build config 00:02:38.347 sched: explicitly disabled via build config 00:02:38.347 stack: explicitly disabled via build config 00:02:38.347 ipsec: explicitly disabled via build config 00:02:38.347 pdcp: explicitly disabled via build config 00:02:38.347 fib: explicitly disabled via build config 00:02:38.347 port: explicitly disabled via build config 00:02:38.347 pdump: explicitly disabled via build config 00:02:38.347 table: explicitly disabled via build config 00:02:38.347 pipeline: explicitly disabled via build config 00:02:38.347 graph: explicitly disabled via build config 00:02:38.347 node: explicitly disabled via build config 00:02:38.347 00:02:38.347 drivers: 00:02:38.347 common/cpt: not in enabled drivers build config 00:02:38.347 common/dpaax: not in enabled drivers build config 00:02:38.347 common/iavf: not in enabled drivers build config 00:02:38.347 common/idpf: not in enabled drivers build config 00:02:38.347 common/ionic: not in enabled drivers build config 00:02:38.347 common/mvep: not in enabled drivers build config 00:02:38.347 common/octeontx: not in enabled drivers build config 00:02:38.347 bus/auxiliary: not in enabled drivers build config 00:02:38.347 bus/cdx: not in enabled drivers build config 00:02:38.347 bus/dpaa: not in enabled drivers build config 00:02:38.347 bus/fslmc: 
not in enabled drivers build config 00:02:38.347 bus/ifpga: not in enabled drivers build config 00:02:38.347 bus/platform: not in enabled drivers build config 00:02:38.347 bus/uacce: not in enabled drivers build config 00:02:38.347 bus/vmbus: not in enabled drivers build config 00:02:38.347 common/cnxk: not in enabled drivers build config 00:02:38.347 common/mlx5: not in enabled drivers build config 00:02:38.347 common/nfp: not in enabled drivers build config 00:02:38.347 common/nitrox: not in enabled drivers build config 00:02:38.347 common/qat: not in enabled drivers build config 00:02:38.347 common/sfc_efx: not in enabled drivers build config 00:02:38.347 mempool/bucket: not in enabled drivers build config 00:02:38.347 mempool/cnxk: not in enabled drivers build config 00:02:38.347 mempool/dpaa: not in enabled drivers build config 00:02:38.347 mempool/dpaa2: not in enabled drivers build config 00:02:38.347 mempool/octeontx: not in enabled drivers build config 00:02:38.347 mempool/stack: not in enabled drivers build config 00:02:38.347 dma/cnxk: not in enabled drivers build config 00:02:38.347 dma/dpaa: not in enabled drivers build config 00:02:38.347 dma/dpaa2: not in enabled drivers build config 00:02:38.347 dma/hisilicon: not in enabled drivers build config 00:02:38.347 dma/idxd: not in enabled drivers build config 00:02:38.347 dma/ioat: not in enabled drivers build config 00:02:38.347 dma/skeleton: not in enabled drivers build config 00:02:38.347 net/af_packet: not in enabled drivers build config 00:02:38.347 net/af_xdp: not in enabled drivers build config 00:02:38.347 net/ark: not in enabled drivers build config 00:02:38.347 net/atlantic: not in enabled drivers build config 00:02:38.347 net/avp: not in enabled drivers build config 00:02:38.347 net/axgbe: not in enabled drivers build config 00:02:38.347 net/bnx2x: not in enabled drivers build config 00:02:38.347 net/bnxt: not in enabled drivers build config 00:02:38.347 net/bonding: not in enabled drivers 
build config 00:02:38.347 net/cnxk: not in enabled drivers build config 00:02:38.347 net/cpfl: not in enabled drivers build config 00:02:38.347 net/cxgbe: not in enabled drivers build config 00:02:38.347 net/dpaa: not in enabled drivers build config 00:02:38.347 net/dpaa2: not in enabled drivers build config 00:02:38.347 net/e1000: not in enabled drivers build config 00:02:38.347 net/ena: not in enabled drivers build config 00:02:38.347 net/enetc: not in enabled drivers build config 00:02:38.347 net/enetfec: not in enabled drivers build config 00:02:38.347 net/enic: not in enabled drivers build config 00:02:38.347 net/failsafe: not in enabled drivers build config 00:02:38.347 net/fm10k: not in enabled drivers build config 00:02:38.347 net/gve: not in enabled drivers build config 00:02:38.347 net/hinic: not in enabled drivers build config 00:02:38.347 net/hns3: not in enabled drivers build config 00:02:38.347 net/i40e: not in enabled drivers build config 00:02:38.347 net/iavf: not in enabled drivers build config 00:02:38.347 net/ice: not in enabled drivers build config 00:02:38.347 net/idpf: not in enabled drivers build config 00:02:38.347 net/igc: not in enabled drivers build config 00:02:38.347 net/ionic: not in enabled drivers build config 00:02:38.347 net/ipn3ke: not in enabled drivers build config 00:02:38.347 net/ixgbe: not in enabled drivers build config 00:02:38.347 net/mana: not in enabled drivers build config 00:02:38.347 net/memif: not in enabled drivers build config 00:02:38.347 net/mlx4: not in enabled drivers build config 00:02:38.347 net/mlx5: not in enabled drivers build config 00:02:38.347 net/mvneta: not in enabled drivers build config 00:02:38.347 net/mvpp2: not in enabled drivers build config 00:02:38.347 net/netvsc: not in enabled drivers build config 00:02:38.347 net/nfb: not in enabled drivers build config 00:02:38.347 net/nfp: not in enabled drivers build config 00:02:38.347 net/ngbe: not in enabled drivers build config 00:02:38.347 net/null: 
not in enabled drivers build config 00:02:38.347 net/octeontx: not in enabled drivers build config 00:02:38.347 net/octeon_ep: not in enabled drivers build config 00:02:38.347 net/pcap: not in enabled drivers build config 00:02:38.347 net/pfe: not in enabled drivers build config 00:02:38.347 net/qede: not in enabled drivers build config 00:02:38.347 net/ring: not in enabled drivers build config 00:02:38.347 net/sfc: not in enabled drivers build config 00:02:38.347 net/softnic: not in enabled drivers build config 00:02:38.347 net/tap: not in enabled drivers build config 00:02:38.347 net/thunderx: not in enabled drivers build config 00:02:38.347 net/txgbe: not in enabled drivers build config 00:02:38.347 net/vdev_netvsc: not in enabled drivers build config 00:02:38.347 net/vhost: not in enabled drivers build config 00:02:38.347 net/virtio: not in enabled drivers build config 00:02:38.347 net/vmxnet3: not in enabled drivers build config 00:02:38.347 raw/*: missing internal dependency, "rawdev" 00:02:38.347 crypto/armv8: not in enabled drivers build config 00:02:38.347 crypto/bcmfs: not in enabled drivers build config 00:02:38.347 crypto/caam_jr: not in enabled drivers build config 00:02:38.347 crypto/ccp: not in enabled drivers build config 00:02:38.347 crypto/cnxk: not in enabled drivers build config 00:02:38.347 crypto/dpaa_sec: not in enabled drivers build config 00:02:38.347 crypto/dpaa2_sec: not in enabled drivers build config 00:02:38.347 crypto/ipsec_mb: not in enabled drivers build config 00:02:38.347 crypto/mlx5: not in enabled drivers build config 00:02:38.347 crypto/mvsam: not in enabled drivers build config 00:02:38.347 crypto/nitrox: not in enabled drivers build config 00:02:38.347 crypto/null: not in enabled drivers build config 00:02:38.347 crypto/octeontx: not in enabled drivers build config 00:02:38.347 crypto/openssl: not in enabled drivers build config 00:02:38.347 crypto/scheduler: not in enabled drivers build config 00:02:38.347 crypto/uadk: not 
in enabled drivers build config 00:02:38.347 crypto/virtio: not in enabled drivers build config 00:02:38.347 compress/isal: not in enabled drivers build config 00:02:38.347 compress/mlx5: not in enabled drivers build config 00:02:38.347 compress/nitrox: not in enabled drivers build config 00:02:38.347 compress/octeontx: not in enabled drivers build config 00:02:38.347 compress/zlib: not in enabled drivers build config 00:02:38.347 regex/*: missing internal dependency, "regexdev" 00:02:38.347 ml/*: missing internal dependency, "mldev" 00:02:38.347 vdpa/ifc: not in enabled drivers build config 00:02:38.347 vdpa/mlx5: not in enabled drivers build config 00:02:38.347 vdpa/nfp: not in enabled drivers build config 00:02:38.347 vdpa/sfc: not in enabled drivers build config 00:02:38.347 event/*: missing internal dependency, "eventdev" 00:02:38.347 baseband/*: missing internal dependency, "bbdev" 00:02:38.348 gpu/*: missing internal dependency, "gpudev" 00:02:38.348 00:02:38.348 00:02:38.348 Build targets in project: 85 00:02:38.348 00:02:38.348 DPDK 24.03.0 00:02:38.348 00:02:38.348 User defined options 00:02:38.348 buildtype : debug 00:02:38.348 default_library : shared 00:02:38.348 libdir : lib 00:02:38.348 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:38.348 b_sanitize : address 00:02:38.348 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:38.348 c_link_args : 00:02:38.348 cpu_instruction_set: native 00:02:38.348 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:38.348 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:38.348 enable_docs : false 00:02:38.348 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:38.348 enable_kmods : false 00:02:38.348 max_lcores : 128 00:02:38.348 tests : false 00:02:38.348 00:02:38.348 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:38.915 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:38.915 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:38.915 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:38.915 [3/268] Linking static target lib/librte_kvargs.a 00:02:38.915 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:38.915 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:38.915 [6/268] Linking static target lib/librte_log.a 00:02:39.172 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:39.172 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:39.430 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:39.430 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:39.431 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:39.431 [12/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.431 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:39.431 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:39.431 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:39.431 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 
00:02:39.736 [17/268] Linking static target lib/librte_telemetry.a 00:02:39.736 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:40.015 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:40.015 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.015 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:40.015 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:40.015 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:40.015 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:40.015 [25/268] Linking target lib/librte_log.so.24.1 00:02:40.015 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:40.015 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:40.274 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:40.274 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:40.274 [30/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:40.274 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:40.274 [32/268] Linking target lib/librte_kvargs.so.24.1 00:02:40.274 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:40.274 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:40.533 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:40.533 [36/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.533 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:40.533 [38/268] Linking target lib/librte_telemetry.so.24.1 00:02:40.533 
[39/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:40.533 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:40.533 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:40.533 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:40.792 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:40.792 [44/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:40.792 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:40.792 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:41.050 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:41.050 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:41.050 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:41.050 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:41.050 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:41.309 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:41.309 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:41.309 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:41.309 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:41.567 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:41.567 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:41.567 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:41.567 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:41.567 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 
00:02:41.567 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:41.826 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:41.826 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:41.826 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:41.826 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:41.826 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:42.088 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:42.088 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:42.088 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:42.088 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:42.347 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:42.347 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:42.347 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:42.347 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:42.347 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:42.347 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:42.606 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:42.606 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:42.606 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:42.606 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:42.606 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:42.865 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:42.865 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:42.865 [84/268] 
Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:42.865 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:42.865 [86/268] Linking static target lib/librte_ring.a 00:02:42.865 [87/268] Linking static target lib/librte_eal.a 00:02:43.124 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:43.124 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:43.382 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:43.382 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:43.382 [92/268] Linking static target lib/librte_rcu.a 00:02:43.382 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:43.382 [94/268] Linking static target lib/librte_mempool.a 00:02:43.382 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:43.382 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.641 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:43.900 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:43.900 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:43.900 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:43.900 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.158 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:44.158 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:44.158 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:44.158 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:44.158 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:44.158 [107/268] Linking static target lib/librte_mbuf.a 00:02:44.416 [108/268] Compiling C 
object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:44.416 [109/268] Linking static target lib/librte_net.a 00:02:44.416 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:44.416 [111/268] Linking static target lib/librte_meter.a 00:02:44.674 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:44.675 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.934 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:44.934 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.934 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:44.934 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.934 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:45.192 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:45.451 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.451 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:45.451 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:45.709 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:45.709 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:45.709 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:45.709 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:45.967 [127/268] Linking static target lib/librte_pci.a 00:02:45.967 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:45.968 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:45.968 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:45.968 
[131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:45.968 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:45.968 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:45.968 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:46.226 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:46.226 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:46.226 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:46.226 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.226 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:46.226 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:46.226 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:46.226 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:46.226 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:46.226 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:46.485 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:46.485 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:46.485 [147/268] Linking static target lib/librte_cmdline.a 00:02:46.485 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:46.743 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:46.743 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:46.743 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:47.002 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 
00:02:47.002 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:47.002 [154/268] Linking static target lib/librte_timer.a 00:02:47.002 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:47.261 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:47.261 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:47.261 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:47.261 [159/268] Linking static target lib/librte_compressdev.a 00:02:47.261 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:47.519 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:47.519 [162/268] Linking static target lib/librte_hash.a 00:02:47.519 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:47.519 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:47.519 [165/268] Linking static target lib/librte_dmadev.a 00:02:47.519 [166/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.778 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:47.778 [168/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:47.778 [169/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:47.778 [170/268] Linking static target lib/librte_ethdev.a 00:02:47.778 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:48.037 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:48.037 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:48.037 [174/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.296 [175/268] Generating lib/compressdev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:48.296 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:48.296 [177/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.555 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:48.555 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:48.555 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:48.555 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:48.555 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:48.555 [183/268] Linking static target lib/librte_cryptodev.a 00:02:48.555 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.813 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:48.813 [186/268] Linking static target lib/librte_reorder.a 00:02:48.813 [187/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:48.813 [188/268] Linking static target lib/librte_power.a 00:02:49.073 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:49.073 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:49.073 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:49.073 [192/268] Linking static target lib/librte_security.a 00:02:49.073 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:49.641 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.641 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:49.899 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.899 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:50.158 [198/268] 
Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:50.158 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:50.158 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:50.158 [201/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.417 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:50.417 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:50.417 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:50.417 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:50.676 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:50.676 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:50.676 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:50.676 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:50.676 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:50.936 [211/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:50.936 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:50.936 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:50.936 [214/268] Linking static target drivers/librte_bus_vdev.a 00:02:50.936 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:50.936 [216/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:50.936 [217/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.936 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:50.936 
[219/268] Linking static target drivers/librte_bus_pci.a 00:02:50.936 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:50.936 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:51.195 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.195 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:51.195 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:51.195 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:51.195 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:51.454 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.833 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:54.738 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.738 [230/268] Linking target lib/librte_eal.so.24.1 00:02:54.738 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:54.738 [232/268] Linking target lib/librte_meter.so.24.1 00:02:54.738 [233/268] Linking target lib/librte_timer.so.24.1 00:02:54.738 [234/268] Linking target lib/librte_pci.so.24.1 00:02:54.738 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:54.738 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:54.738 [237/268] Linking target lib/librte_ring.so.24.1 00:02:54.738 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:54.738 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:54.738 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:54.738 [241/268] Generating symbol file 
lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:54.738 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:54.998 [243/268] Linking target lib/librte_mempool.so.24.1 00:02:54.998 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:54.998 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:54.998 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:54.998 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:55.256 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:55.256 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:55.256 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:55.257 [251/268] Linking target lib/librte_reorder.so.24.1 00:02:55.257 [252/268] Linking target lib/librte_cryptodev.so.24.1 00:02:55.257 [253/268] Linking target lib/librte_compressdev.so.24.1 00:02:55.257 [254/268] Linking target lib/librte_net.so.24.1 00:02:55.515 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:55.515 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:55.515 [257/268] Linking target lib/librte_security.so.24.1 00:02:55.515 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:55.515 [259/268] Linking target lib/librte_hash.so.24.1 00:02:55.515 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.773 [261/268] Linking target lib/librte_ethdev.so.24.1 00:02:55.773 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:55.773 [263/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:55.773 [264/268] Linking static target lib/librte_vhost.a 00:02:55.773 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:56.032 
[266/268] Linking target lib/librte_power.so.24.1 00:02:57.937 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.937 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:57.937 INFO: autodetecting backend as ninja 00:02:57.937 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:59.316 CC lib/ut/ut.o 00:02:59.316 CC lib/ut_mock/mock.o 00:02:59.316 CC lib/log/log.o 00:02:59.316 CC lib/log/log_flags.o 00:02:59.316 CC lib/log/log_deprecated.o 00:02:59.575 LIB libspdk_ut_mock.a 00:02:59.575 LIB libspdk_log.a 00:02:59.575 LIB libspdk_ut.a 00:02:59.575 SO libspdk_ut_mock.so.6.0 00:02:59.575 SO libspdk_ut.so.2.0 00:02:59.575 SO libspdk_log.so.7.0 00:02:59.575 SYMLINK libspdk_ut_mock.so 00:02:59.575 SYMLINK libspdk_ut.so 00:02:59.575 SYMLINK libspdk_log.so 00:02:59.848 CXX lib/trace_parser/trace.o 00:02:59.848 CC lib/ioat/ioat.o 00:02:59.848 CC lib/dma/dma.o 00:02:59.848 CC lib/util/bit_array.o 00:02:59.848 CC lib/util/base64.o 00:02:59.848 CC lib/util/cpuset.o 00:02:59.848 CC lib/util/crc16.o 00:02:59.848 CC lib/util/crc32c.o 00:02:59.848 CC lib/util/crc32.o 00:03:00.117 CC lib/util/crc32_ieee.o 00:03:00.117 CC lib/vfio_user/host/vfio_user_pci.o 00:03:00.117 CC lib/vfio_user/host/vfio_user.o 00:03:00.117 CC lib/util/crc64.o 00:03:00.117 CC lib/util/dif.o 00:03:00.117 CC lib/util/fd.o 00:03:00.117 LIB libspdk_dma.a 00:03:00.117 CC lib/util/fd_group.o 00:03:00.117 CC lib/util/file.o 00:03:00.117 SO libspdk_dma.so.4.0 00:03:00.117 CC lib/util/hexlify.o 00:03:00.376 LIB libspdk_ioat.a 00:03:00.376 SYMLINK libspdk_dma.so 00:03:00.377 CC lib/util/iov.o 00:03:00.377 CC lib/util/math.o 00:03:00.377 CC lib/util/net.o 00:03:00.377 SO libspdk_ioat.so.7.0 00:03:00.377 LIB libspdk_vfio_user.a 00:03:00.377 CC lib/util/pipe.o 00:03:00.377 SO libspdk_vfio_user.so.5.0 00:03:00.377 SYMLINK libspdk_ioat.so 00:03:00.377 CC lib/util/strerror_tls.o 00:03:00.377 CC 
lib/util/string.o 00:03:00.377 CC lib/util/uuid.o 00:03:00.377 SYMLINK libspdk_vfio_user.so 00:03:00.377 CC lib/util/xor.o 00:03:00.377 CC lib/util/zipf.o 00:03:00.635 LIB libspdk_util.a 00:03:00.894 SO libspdk_util.so.10.0 00:03:00.894 LIB libspdk_trace_parser.a 00:03:00.894 SO libspdk_trace_parser.so.5.0 00:03:00.894 SYMLINK libspdk_util.so 00:03:01.153 SYMLINK libspdk_trace_parser.so 00:03:01.153 CC lib/conf/conf.o 00:03:01.153 CC lib/rdma_utils/rdma_utils.o 00:03:01.153 CC lib/vmd/vmd.o 00:03:01.153 CC lib/env_dpdk/env.o 00:03:01.153 CC lib/vmd/led.o 00:03:01.153 CC lib/idxd/idxd.o 00:03:01.153 CC lib/env_dpdk/memory.o 00:03:01.153 CC lib/json/json_parse.o 00:03:01.153 CC lib/idxd/idxd_user.o 00:03:01.153 CC lib/rdma_provider/common.o 00:03:01.413 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:01.413 CC lib/json/json_util.o 00:03:01.413 LIB libspdk_conf.a 00:03:01.413 CC lib/json/json_write.o 00:03:01.413 CC lib/env_dpdk/pci.o 00:03:01.413 SO libspdk_conf.so.6.0 00:03:01.413 LIB libspdk_rdma_utils.a 00:03:01.413 SO libspdk_rdma_utils.so.1.0 00:03:01.413 SYMLINK libspdk_conf.so 00:03:01.413 LIB libspdk_rdma_provider.a 00:03:01.413 CC lib/idxd/idxd_kernel.o 00:03:01.413 SYMLINK libspdk_rdma_utils.so 00:03:01.413 CC lib/env_dpdk/init.o 00:03:01.413 SO libspdk_rdma_provider.so.6.0 00:03:01.672 SYMLINK libspdk_rdma_provider.so 00:03:01.672 CC lib/env_dpdk/threads.o 00:03:01.672 CC lib/env_dpdk/pci_ioat.o 00:03:01.672 CC lib/env_dpdk/pci_virtio.o 00:03:01.672 LIB libspdk_json.a 00:03:01.672 SO libspdk_json.so.6.0 00:03:01.672 CC lib/env_dpdk/pci_vmd.o 00:03:01.672 CC lib/env_dpdk/pci_idxd.o 00:03:01.672 CC lib/env_dpdk/pci_event.o 00:03:01.672 CC lib/env_dpdk/sigbus_handler.o 00:03:01.672 LIB libspdk_idxd.a 00:03:01.672 SYMLINK libspdk_json.so 00:03:01.931 SO libspdk_idxd.so.12.0 00:03:01.931 CC lib/env_dpdk/pci_dpdk.o 00:03:01.931 LIB libspdk_vmd.a 00:03:01.931 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:01.931 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:01.931 SO 
libspdk_vmd.so.6.0 00:03:01.931 SYMLINK libspdk_idxd.so 00:03:01.931 SYMLINK libspdk_vmd.so 00:03:01.931 CC lib/jsonrpc/jsonrpc_client.o 00:03:01.931 CC lib/jsonrpc/jsonrpc_server.o 00:03:01.931 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:01.931 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:02.190 LIB libspdk_jsonrpc.a 00:03:02.190 SO libspdk_jsonrpc.so.6.0 00:03:02.449 SYMLINK libspdk_jsonrpc.so 00:03:02.708 CC lib/rpc/rpc.o 00:03:02.708 LIB libspdk_env_dpdk.a 00:03:02.708 SO libspdk_env_dpdk.so.15.0 00:03:02.967 LIB libspdk_rpc.a 00:03:02.967 SO libspdk_rpc.so.6.0 00:03:02.967 SYMLINK libspdk_env_dpdk.so 00:03:02.967 SYMLINK libspdk_rpc.so 00:03:03.226 CC lib/trace/trace.o 00:03:03.226 CC lib/trace/trace_flags.o 00:03:03.226 CC lib/notify/notify.o 00:03:03.226 CC lib/trace/trace_rpc.o 00:03:03.226 CC lib/notify/notify_rpc.o 00:03:03.226 CC lib/keyring/keyring.o 00:03:03.226 CC lib/keyring/keyring_rpc.o 00:03:03.485 LIB libspdk_notify.a 00:03:03.485 SO libspdk_notify.so.6.0 00:03:03.485 LIB libspdk_trace.a 00:03:03.485 SYMLINK libspdk_notify.so 00:03:03.485 LIB libspdk_keyring.a 00:03:03.485 SO libspdk_trace.so.10.0 00:03:03.485 SO libspdk_keyring.so.1.0 00:03:03.744 SYMLINK libspdk_trace.so 00:03:03.744 SYMLINK libspdk_keyring.so 00:03:04.003 CC lib/sock/sock_rpc.o 00:03:04.003 CC lib/sock/sock.o 00:03:04.003 CC lib/thread/thread.o 00:03:04.003 CC lib/thread/iobuf.o 00:03:04.262 LIB libspdk_sock.a 00:03:04.522 SO libspdk_sock.so.10.0 00:03:04.522 SYMLINK libspdk_sock.so 00:03:04.780 CC lib/nvme/nvme_ctrlr.o 00:03:04.780 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:04.780 CC lib/nvme/nvme_fabric.o 00:03:04.780 CC lib/nvme/nvme_ns_cmd.o 00:03:04.780 CC lib/nvme/nvme_pcie_common.o 00:03:04.780 CC lib/nvme/nvme_ns.o 00:03:04.780 CC lib/nvme/nvme_pcie.o 00:03:04.780 CC lib/nvme/nvme_qpair.o 00:03:04.780 CC lib/nvme/nvme.o 00:03:05.716 CC lib/nvme/nvme_quirks.o 00:03:05.716 CC lib/nvme/nvme_transport.o 00:03:05.716 CC lib/nvme/nvme_discovery.o 00:03:05.716 CC 
lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:05.716 LIB libspdk_thread.a 00:03:05.717 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:05.717 SO libspdk_thread.so.10.1 00:03:05.717 SYMLINK libspdk_thread.so 00:03:05.717 CC lib/nvme/nvme_tcp.o 00:03:05.975 CC lib/nvme/nvme_opal.o 00:03:05.975 CC lib/accel/accel.o 00:03:06.234 CC lib/accel/accel_rpc.o 00:03:06.234 CC lib/blob/blobstore.o 00:03:06.234 CC lib/nvme/nvme_io_msg.o 00:03:06.234 CC lib/init/json_config.o 00:03:06.234 CC lib/virtio/virtio.o 00:03:06.234 CC lib/virtio/virtio_vhost_user.o 00:03:06.234 CC lib/virtio/virtio_vfio_user.o 00:03:06.493 CC lib/blob/request.o 00:03:06.493 CC lib/init/subsystem.o 00:03:06.493 CC lib/init/subsystem_rpc.o 00:03:06.493 CC lib/init/rpc.o 00:03:06.752 CC lib/virtio/virtio_pci.o 00:03:06.752 CC lib/blob/zeroes.o 00:03:06.752 CC lib/blob/blob_bs_dev.o 00:03:06.752 CC lib/accel/accel_sw.o 00:03:06.752 CC lib/nvme/nvme_poll_group.o 00:03:06.752 LIB libspdk_init.a 00:03:06.752 SO libspdk_init.so.5.0 00:03:06.752 CC lib/nvme/nvme_zns.o 00:03:06.752 SYMLINK libspdk_init.so 00:03:07.011 CC lib/nvme/nvme_stubs.o 00:03:07.011 LIB libspdk_virtio.a 00:03:07.011 CC lib/nvme/nvme_auth.o 00:03:07.011 SO libspdk_virtio.so.7.0 00:03:07.011 LIB libspdk_accel.a 00:03:07.011 SO libspdk_accel.so.16.0 00:03:07.011 SYMLINK libspdk_virtio.so 00:03:07.270 CC lib/nvme/nvme_cuse.o 00:03:07.270 SYMLINK libspdk_accel.so 00:03:07.270 CC lib/nvme/nvme_rdma.o 00:03:07.270 CC lib/event/app.o 00:03:07.270 CC lib/event/reactor.o 00:03:07.270 CC lib/bdev/bdev.o 00:03:07.270 CC lib/event/log_rpc.o 00:03:07.270 CC lib/event/app_rpc.o 00:03:07.270 CC lib/bdev/bdev_rpc.o 00:03:07.529 CC lib/event/scheduler_static.o 00:03:07.529 CC lib/bdev/bdev_zone.o 00:03:07.529 CC lib/bdev/part.o 00:03:07.529 CC lib/bdev/scsi_nvme.o 00:03:07.788 LIB libspdk_event.a 00:03:07.788 SO libspdk_event.so.14.0 00:03:07.788 SYMLINK libspdk_event.so 00:03:08.725 LIB libspdk_nvme.a 00:03:09.005 SO libspdk_nvme.so.13.1 00:03:09.263 SYMLINK libspdk_nvme.so 
00:03:09.522 LIB libspdk_blob.a 00:03:09.522 SO libspdk_blob.so.11.0 00:03:09.780 SYMLINK libspdk_blob.so 00:03:10.039 CC lib/blobfs/tree.o 00:03:10.039 CC lib/blobfs/blobfs.o 00:03:10.039 CC lib/lvol/lvol.o 00:03:10.039 LIB libspdk_bdev.a 00:03:10.039 SO libspdk_bdev.so.16.0 00:03:10.298 SYMLINK libspdk_bdev.so 00:03:10.298 CC lib/scsi/dev.o 00:03:10.298 CC lib/scsi/lun.o 00:03:10.298 CC lib/scsi/port.o 00:03:10.557 CC lib/scsi/scsi.o 00:03:10.557 CC lib/nbd/nbd.o 00:03:10.557 CC lib/nvmf/ctrlr.o 00:03:10.557 CC lib/ftl/ftl_core.o 00:03:10.557 CC lib/ublk/ublk.o 00:03:10.557 CC lib/nvmf/ctrlr_discovery.o 00:03:10.557 CC lib/nvmf/ctrlr_bdev.o 00:03:10.816 CC lib/nbd/nbd_rpc.o 00:03:10.816 CC lib/scsi/scsi_bdev.o 00:03:10.816 CC lib/scsi/scsi_pr.o 00:03:10.816 LIB libspdk_nbd.a 00:03:10.816 CC lib/ftl/ftl_init.o 00:03:10.816 SO libspdk_nbd.so.7.0 00:03:10.816 LIB libspdk_blobfs.a 00:03:11.076 SYMLINK libspdk_nbd.so 00:03:11.076 CC lib/ftl/ftl_layout.o 00:03:11.076 SO libspdk_blobfs.so.10.0 00:03:11.076 SYMLINK libspdk_blobfs.so 00:03:11.076 CC lib/ublk/ublk_rpc.o 00:03:11.076 CC lib/scsi/scsi_rpc.o 00:03:11.076 LIB libspdk_lvol.a 00:03:11.076 CC lib/nvmf/subsystem.o 00:03:11.076 SO libspdk_lvol.so.10.0 00:03:11.076 CC lib/nvmf/nvmf.o 00:03:11.076 SYMLINK libspdk_lvol.so 00:03:11.076 CC lib/nvmf/nvmf_rpc.o 00:03:11.076 CC lib/nvmf/transport.o 00:03:11.335 CC lib/nvmf/tcp.o 00:03:11.335 LIB libspdk_ublk.a 00:03:11.335 CC lib/scsi/task.o 00:03:11.335 SO libspdk_ublk.so.3.0 00:03:11.335 CC lib/ftl/ftl_debug.o 00:03:11.335 SYMLINK libspdk_ublk.so 00:03:11.335 CC lib/nvmf/stubs.o 00:03:11.335 CC lib/nvmf/mdns_server.o 00:03:11.335 LIB libspdk_scsi.a 00:03:11.594 CC lib/ftl/ftl_io.o 00:03:11.594 SO libspdk_scsi.so.9.0 00:03:11.594 SYMLINK libspdk_scsi.so 00:03:11.594 CC lib/nvmf/rdma.o 00:03:11.854 CC lib/ftl/ftl_sb.o 00:03:11.854 CC lib/ftl/ftl_l2p.o 00:03:11.854 CC lib/ftl/ftl_l2p_flat.o 00:03:11.854 CC lib/nvmf/auth.o 00:03:12.113 CC lib/ftl/ftl_nv_cache.o 00:03:12.113 
CC lib/ftl/ftl_band.o 00:03:12.113 CC lib/ftl/ftl_band_ops.o 00:03:12.113 CC lib/ftl/ftl_writer.o 00:03:12.113 CC lib/iscsi/conn.o 00:03:12.372 CC lib/ftl/ftl_rq.o 00:03:12.372 CC lib/ftl/ftl_reloc.o 00:03:12.372 CC lib/ftl/ftl_l2p_cache.o 00:03:12.372 CC lib/ftl/ftl_p2l.o 00:03:12.372 CC lib/vhost/vhost.o 00:03:12.631 CC lib/ftl/mngt/ftl_mngt.o 00:03:12.631 CC lib/iscsi/init_grp.o 00:03:12.631 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:12.631 CC lib/vhost/vhost_rpc.o 00:03:12.890 CC lib/iscsi/iscsi.o 00:03:12.890 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:12.890 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:12.890 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:12.890 CC lib/vhost/vhost_scsi.o 00:03:12.890 CC lib/iscsi/md5.o 00:03:12.890 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:12.890 CC lib/iscsi/param.o 00:03:13.149 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:13.149 CC lib/iscsi/portal_grp.o 00:03:13.149 CC lib/iscsi/tgt_node.o 00:03:13.149 CC lib/vhost/vhost_blk.o 00:03:13.149 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:13.149 CC lib/iscsi/iscsi_subsystem.o 00:03:13.408 CC lib/iscsi/iscsi_rpc.o 00:03:13.408 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:13.408 CC lib/vhost/rte_vhost_user.o 00:03:13.408 CC lib/iscsi/task.o 00:03:13.667 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:13.667 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:13.667 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:13.667 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:13.667 CC lib/ftl/utils/ftl_conf.o 00:03:13.926 CC lib/ftl/utils/ftl_md.o 00:03:13.926 CC lib/ftl/utils/ftl_mempool.o 00:03:13.926 CC lib/ftl/utils/ftl_bitmap.o 00:03:13.926 CC lib/ftl/utils/ftl_property.o 00:03:13.926 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:13.926 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:13.926 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:14.185 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:14.185 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:14.185 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:14.185 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:14.185 CC 
lib/ftl/upgrade/ftl_sb_v3.o 00:03:14.185 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:14.185 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:14.185 LIB libspdk_iscsi.a 00:03:14.185 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:14.185 CC lib/ftl/base/ftl_base_dev.o 00:03:14.185 CC lib/ftl/base/ftl_base_bdev.o 00:03:14.444 LIB libspdk_nvmf.a 00:03:14.444 SO libspdk_iscsi.so.8.0 00:03:14.444 CC lib/ftl/ftl_trace.o 00:03:14.444 SO libspdk_nvmf.so.19.0 00:03:14.444 LIB libspdk_vhost.a 00:03:14.444 SYMLINK libspdk_iscsi.so 00:03:14.444 SO libspdk_vhost.so.8.0 00:03:14.703 LIB libspdk_ftl.a 00:03:14.703 SYMLINK libspdk_vhost.so 00:03:14.703 SYMLINK libspdk_nvmf.so 00:03:14.963 SO libspdk_ftl.so.9.0 00:03:15.222 SYMLINK libspdk_ftl.so 00:03:15.480 CC module/env_dpdk/env_dpdk_rpc.o 00:03:15.481 CC module/accel/dsa/accel_dsa.o 00:03:15.481 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:15.481 CC module/sock/posix/posix.o 00:03:15.739 CC module/accel/error/accel_error.o 00:03:15.739 CC module/accel/ioat/accel_ioat.o 00:03:15.739 CC module/accel/iaa/accel_iaa.o 00:03:15.739 CC module/blob/bdev/blob_bdev.o 00:03:15.739 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:15.739 CC module/keyring/file/keyring.o 00:03:15.739 LIB libspdk_env_dpdk_rpc.a 00:03:15.739 SO libspdk_env_dpdk_rpc.so.6.0 00:03:15.739 SYMLINK libspdk_env_dpdk_rpc.so 00:03:15.739 CC module/accel/ioat/accel_ioat_rpc.o 00:03:15.739 LIB libspdk_scheduler_dpdk_governor.a 00:03:15.739 CC module/keyring/file/keyring_rpc.o 00:03:15.739 CC module/accel/error/accel_error_rpc.o 00:03:15.739 LIB libspdk_scheduler_dynamic.a 00:03:15.739 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:15.739 CC module/accel/iaa/accel_iaa_rpc.o 00:03:15.739 SO libspdk_scheduler_dynamic.so.4.0 00:03:15.739 CC module/accel/dsa/accel_dsa_rpc.o 00:03:15.739 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:15.998 LIB libspdk_blob_bdev.a 00:03:15.998 SYMLINK libspdk_scheduler_dynamic.so 00:03:15.998 LIB libspdk_accel_ioat.a 00:03:15.998 SO 
libspdk_blob_bdev.so.11.0 00:03:15.998 LIB libspdk_accel_error.a 00:03:15.998 LIB libspdk_keyring_file.a 00:03:15.998 SO libspdk_accel_ioat.so.6.0 00:03:15.998 SO libspdk_accel_error.so.2.0 00:03:15.998 LIB libspdk_accel_iaa.a 00:03:15.998 SO libspdk_keyring_file.so.1.0 00:03:15.998 CC module/keyring/linux/keyring.o 00:03:15.998 SO libspdk_accel_iaa.so.3.0 00:03:15.998 LIB libspdk_accel_dsa.a 00:03:15.998 SYMLINK libspdk_blob_bdev.so 00:03:15.998 SYMLINK libspdk_accel_ioat.so 00:03:15.998 CC module/keyring/linux/keyring_rpc.o 00:03:15.998 SYMLINK libspdk_accel_error.so 00:03:15.998 SO libspdk_accel_dsa.so.5.0 00:03:15.998 SYMLINK libspdk_keyring_file.so 00:03:15.998 SYMLINK libspdk_accel_iaa.so 00:03:15.998 CC module/sock/uring/uring.o 00:03:15.998 CC module/scheduler/gscheduler/gscheduler.o 00:03:15.998 SYMLINK libspdk_accel_dsa.so 00:03:16.256 LIB libspdk_keyring_linux.a 00:03:16.256 SO libspdk_keyring_linux.so.1.0 00:03:16.256 LIB libspdk_scheduler_gscheduler.a 00:03:16.256 SYMLINK libspdk_keyring_linux.so 00:03:16.256 CC module/bdev/lvol/vbdev_lvol.o 00:03:16.256 CC module/bdev/malloc/bdev_malloc.o 00:03:16.256 CC module/bdev/error/vbdev_error.o 00:03:16.256 SO libspdk_scheduler_gscheduler.so.4.0 00:03:16.256 CC module/bdev/delay/vbdev_delay.o 00:03:16.257 CC module/blobfs/bdev/blobfs_bdev.o 00:03:16.257 CC module/bdev/gpt/gpt.o 00:03:16.257 SYMLINK libspdk_scheduler_gscheduler.so 00:03:16.257 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:16.257 LIB libspdk_sock_posix.a 00:03:16.515 SO libspdk_sock_posix.so.6.0 00:03:16.515 CC module/bdev/null/bdev_null.o 00:03:16.515 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:16.515 CC module/bdev/gpt/vbdev_gpt.o 00:03:16.515 SYMLINK libspdk_sock_posix.so 00:03:16.515 CC module/bdev/null/bdev_null_rpc.o 00:03:16.515 CC module/bdev/error/vbdev_error_rpc.o 00:03:16.515 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:16.515 LIB libspdk_blobfs_bdev.a 00:03:16.773 SO libspdk_blobfs_bdev.so.6.0 00:03:16.773 LIB libspdk_bdev_delay.a 
00:03:16.773 LIB libspdk_bdev_error.a 00:03:16.773 SO libspdk_bdev_delay.so.6.0 00:03:16.773 SO libspdk_bdev_error.so.6.0 00:03:16.773 SYMLINK libspdk_blobfs_bdev.so 00:03:16.773 LIB libspdk_bdev_malloc.a 00:03:16.773 LIB libspdk_bdev_null.a 00:03:16.773 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:16.773 LIB libspdk_bdev_gpt.a 00:03:16.773 SO libspdk_bdev_malloc.so.6.0 00:03:16.773 SO libspdk_bdev_null.so.6.0 00:03:16.773 SYMLINK libspdk_bdev_delay.so 00:03:16.773 SYMLINK libspdk_bdev_error.so 00:03:16.773 SO libspdk_bdev_gpt.so.6.0 00:03:16.773 SYMLINK libspdk_bdev_malloc.so 00:03:16.773 LIB libspdk_sock_uring.a 00:03:16.773 SYMLINK libspdk_bdev_null.so 00:03:16.773 CC module/bdev/nvme/bdev_nvme.o 00:03:16.773 SYMLINK libspdk_bdev_gpt.so 00:03:16.773 CC module/bdev/passthru/vbdev_passthru.o 00:03:16.773 SO libspdk_sock_uring.so.5.0 00:03:17.032 SYMLINK libspdk_sock_uring.so 00:03:17.032 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:17.032 CC module/bdev/raid/bdev_raid.o 00:03:17.032 CC module/bdev/split/vbdev_split.o 00:03:17.032 CC module/bdev/uring/bdev_uring.o 00:03:17.032 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:17.032 CC module/bdev/aio/bdev_aio.o 00:03:17.032 CC module/bdev/ftl/bdev_ftl.o 00:03:17.032 LIB libspdk_bdev_lvol.a 00:03:17.032 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:17.032 SO libspdk_bdev_lvol.so.6.0 00:03:17.032 LIB libspdk_bdev_passthru.a 00:03:17.291 SO libspdk_bdev_passthru.so.6.0 00:03:17.291 CC module/bdev/split/vbdev_split_rpc.o 00:03:17.291 SYMLINK libspdk_bdev_lvol.so 00:03:17.291 SYMLINK libspdk_bdev_passthru.so 00:03:17.291 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:17.291 CC module/bdev/raid/bdev_raid_rpc.o 00:03:17.291 LIB libspdk_bdev_ftl.a 00:03:17.291 CC module/bdev/aio/bdev_aio_rpc.o 00:03:17.291 CC module/bdev/uring/bdev_uring_rpc.o 00:03:17.291 SO libspdk_bdev_ftl.so.6.0 00:03:17.291 LIB libspdk_bdev_split.a 00:03:17.291 CC module/bdev/iscsi/bdev_iscsi.o 00:03:17.291 SO libspdk_bdev_split.so.6.0 
00:03:17.291 LIB libspdk_bdev_zone_block.a 00:03:17.549 SO libspdk_bdev_zone_block.so.6.0 00:03:17.549 SYMLINK libspdk_bdev_ftl.so 00:03:17.549 CC module/bdev/raid/bdev_raid_sb.o 00:03:17.549 SYMLINK libspdk_bdev_split.so 00:03:17.549 CC module/bdev/raid/raid0.o 00:03:17.549 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:17.549 LIB libspdk_bdev_aio.a 00:03:17.549 SYMLINK libspdk_bdev_zone_block.so 00:03:17.549 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:17.549 CC module/bdev/raid/raid1.o 00:03:17.549 LIB libspdk_bdev_uring.a 00:03:17.549 SO libspdk_bdev_aio.so.6.0 00:03:17.549 SO libspdk_bdev_uring.so.6.0 00:03:17.549 SYMLINK libspdk_bdev_aio.so 00:03:17.549 CC module/bdev/raid/concat.o 00:03:17.549 SYMLINK libspdk_bdev_uring.so 00:03:17.549 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:17.808 CC module/bdev/nvme/nvme_rpc.o 00:03:17.808 CC module/bdev/nvme/bdev_mdns_client.o 00:03:17.808 CC module/bdev/nvme/vbdev_opal.o 00:03:17.808 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:17.808 LIB libspdk_bdev_iscsi.a 00:03:17.808 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:17.808 SO libspdk_bdev_iscsi.so.6.0 00:03:17.808 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:17.808 SYMLINK libspdk_bdev_iscsi.so 00:03:17.808 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:18.067 LIB libspdk_bdev_raid.a 00:03:18.067 LIB libspdk_bdev_virtio.a 00:03:18.067 SO libspdk_bdev_raid.so.6.0 00:03:18.067 SO libspdk_bdev_virtio.so.6.0 00:03:18.067 SYMLINK libspdk_bdev_virtio.so 00:03:18.067 SYMLINK libspdk_bdev_raid.so 00:03:19.030 LIB libspdk_bdev_nvme.a 00:03:19.288 SO libspdk_bdev_nvme.so.7.0 00:03:19.288 SYMLINK libspdk_bdev_nvme.so 00:03:19.856 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:19.856 CC module/event/subsystems/sock/sock.o 00:03:19.856 CC module/event/subsystems/keyring/keyring.o 00:03:19.856 CC module/event/subsystems/iobuf/iobuf.o 00:03:19.856 CC module/event/subsystems/vmd/vmd.o 00:03:19.856 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:19.856 CC 
module/event/subsystems/iobuf/iobuf_rpc.o 00:03:19.856 CC module/event/subsystems/scheduler/scheduler.o 00:03:19.856 LIB libspdk_event_vhost_blk.a 00:03:20.115 SO libspdk_event_vhost_blk.so.3.0 00:03:20.115 LIB libspdk_event_sock.a 00:03:20.115 LIB libspdk_event_keyring.a 00:03:20.115 LIB libspdk_event_vmd.a 00:03:20.115 LIB libspdk_event_scheduler.a 00:03:20.115 LIB libspdk_event_iobuf.a 00:03:20.115 SO libspdk_event_sock.so.5.0 00:03:20.115 SO libspdk_event_keyring.so.1.0 00:03:20.115 SO libspdk_event_scheduler.so.4.0 00:03:20.115 SO libspdk_event_vmd.so.6.0 00:03:20.115 SYMLINK libspdk_event_vhost_blk.so 00:03:20.115 SO libspdk_event_iobuf.so.3.0 00:03:20.115 SYMLINK libspdk_event_sock.so 00:03:20.115 SYMLINK libspdk_event_keyring.so 00:03:20.115 SYMLINK libspdk_event_scheduler.so 00:03:20.115 SYMLINK libspdk_event_vmd.so 00:03:20.115 SYMLINK libspdk_event_iobuf.so 00:03:20.374 CC module/event/subsystems/accel/accel.o 00:03:20.632 LIB libspdk_event_accel.a 00:03:20.632 SO libspdk_event_accel.so.6.0 00:03:20.891 SYMLINK libspdk_event_accel.so 00:03:21.150 CC module/event/subsystems/bdev/bdev.o 00:03:21.410 LIB libspdk_event_bdev.a 00:03:21.410 SO libspdk_event_bdev.so.6.0 00:03:21.410 SYMLINK libspdk_event_bdev.so 00:03:21.668 CC module/event/subsystems/ublk/ublk.o 00:03:21.668 CC module/event/subsystems/nbd/nbd.o 00:03:21.668 CC module/event/subsystems/scsi/scsi.o 00:03:21.668 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:21.668 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:21.927 LIB libspdk_event_nbd.a 00:03:21.927 LIB libspdk_event_ublk.a 00:03:21.927 LIB libspdk_event_scsi.a 00:03:21.927 SO libspdk_event_ublk.so.3.0 00:03:21.927 SO libspdk_event_nbd.so.6.0 00:03:21.927 SO libspdk_event_scsi.so.6.0 00:03:21.927 SYMLINK libspdk_event_ublk.so 00:03:21.927 SYMLINK libspdk_event_nbd.so 00:03:21.927 LIB libspdk_event_nvmf.a 00:03:22.186 SYMLINK libspdk_event_scsi.so 00:03:22.186 SO libspdk_event_nvmf.so.6.0 00:03:22.186 SYMLINK libspdk_event_nvmf.so 
00:03:22.445 CC module/event/subsystems/iscsi/iscsi.o 00:03:22.445 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:22.445 LIB libspdk_event_vhost_scsi.a 00:03:22.445 LIB libspdk_event_iscsi.a 00:03:22.703 SO libspdk_event_vhost_scsi.so.3.0 00:03:22.703 SO libspdk_event_iscsi.so.6.0 00:03:22.703 SYMLINK libspdk_event_vhost_scsi.so 00:03:22.703 SYMLINK libspdk_event_iscsi.so 00:03:22.961 SO libspdk.so.6.0 00:03:22.961 SYMLINK libspdk.so 00:03:23.220 CXX app/trace/trace.o 00:03:23.220 CC app/trace_record/trace_record.o 00:03:23.220 TEST_HEADER include/spdk/accel.h 00:03:23.220 TEST_HEADER include/spdk/accel_module.h 00:03:23.220 TEST_HEADER include/spdk/assert.h 00:03:23.220 TEST_HEADER include/spdk/barrier.h 00:03:23.220 TEST_HEADER include/spdk/base64.h 00:03:23.220 TEST_HEADER include/spdk/bdev.h 00:03:23.220 TEST_HEADER include/spdk/bdev_module.h 00:03:23.220 TEST_HEADER include/spdk/bdev_zone.h 00:03:23.220 TEST_HEADER include/spdk/bit_array.h 00:03:23.220 TEST_HEADER include/spdk/bit_pool.h 00:03:23.220 TEST_HEADER include/spdk/blob_bdev.h 00:03:23.220 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:23.220 TEST_HEADER include/spdk/blobfs.h 00:03:23.220 TEST_HEADER include/spdk/blob.h 00:03:23.220 TEST_HEADER include/spdk/conf.h 00:03:23.220 TEST_HEADER include/spdk/config.h 00:03:23.220 TEST_HEADER include/spdk/cpuset.h 00:03:23.220 TEST_HEADER include/spdk/crc16.h 00:03:23.220 CC app/nvmf_tgt/nvmf_main.o 00:03:23.220 TEST_HEADER include/spdk/crc32.h 00:03:23.220 TEST_HEADER include/spdk/crc64.h 00:03:23.220 CC app/iscsi_tgt/iscsi_tgt.o 00:03:23.220 TEST_HEADER include/spdk/dif.h 00:03:23.220 TEST_HEADER include/spdk/dma.h 00:03:23.220 TEST_HEADER include/spdk/endian.h 00:03:23.220 TEST_HEADER include/spdk/env_dpdk.h 00:03:23.220 TEST_HEADER include/spdk/env.h 00:03:23.220 TEST_HEADER include/spdk/event.h 00:03:23.220 TEST_HEADER include/spdk/fd_group.h 00:03:23.220 TEST_HEADER include/spdk/fd.h 00:03:23.220 TEST_HEADER include/spdk/file.h 00:03:23.220 
TEST_HEADER include/spdk/ftl.h 00:03:23.220 TEST_HEADER include/spdk/gpt_spec.h 00:03:23.220 TEST_HEADER include/spdk/hexlify.h 00:03:23.220 TEST_HEADER include/spdk/histogram_data.h 00:03:23.220 TEST_HEADER include/spdk/idxd.h 00:03:23.220 CC test/thread/poller_perf/poller_perf.o 00:03:23.220 TEST_HEADER include/spdk/idxd_spec.h 00:03:23.220 TEST_HEADER include/spdk/init.h 00:03:23.220 TEST_HEADER include/spdk/ioat.h 00:03:23.220 TEST_HEADER include/spdk/ioat_spec.h 00:03:23.220 TEST_HEADER include/spdk/iscsi_spec.h 00:03:23.220 TEST_HEADER include/spdk/json.h 00:03:23.220 CC examples/util/zipf/zipf.o 00:03:23.220 TEST_HEADER include/spdk/jsonrpc.h 00:03:23.220 TEST_HEADER include/spdk/keyring.h 00:03:23.221 TEST_HEADER include/spdk/keyring_module.h 00:03:23.221 TEST_HEADER include/spdk/likely.h 00:03:23.221 TEST_HEADER include/spdk/log.h 00:03:23.221 TEST_HEADER include/spdk/lvol.h 00:03:23.221 TEST_HEADER include/spdk/memory.h 00:03:23.221 TEST_HEADER include/spdk/mmio.h 00:03:23.221 TEST_HEADER include/spdk/nbd.h 00:03:23.221 TEST_HEADER include/spdk/net.h 00:03:23.221 TEST_HEADER include/spdk/notify.h 00:03:23.221 TEST_HEADER include/spdk/nvme.h 00:03:23.221 TEST_HEADER include/spdk/nvme_intel.h 00:03:23.221 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:23.221 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:23.221 TEST_HEADER include/spdk/nvme_spec.h 00:03:23.221 CC test/app/bdev_svc/bdev_svc.o 00:03:23.221 CC test/dma/test_dma/test_dma.o 00:03:23.221 TEST_HEADER include/spdk/nvme_zns.h 00:03:23.221 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:23.479 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:23.479 TEST_HEADER include/spdk/nvmf.h 00:03:23.479 TEST_HEADER include/spdk/nvmf_spec.h 00:03:23.479 TEST_HEADER include/spdk/nvmf_transport.h 00:03:23.479 TEST_HEADER include/spdk/opal.h 00:03:23.479 TEST_HEADER include/spdk/opal_spec.h 00:03:23.479 TEST_HEADER include/spdk/pci_ids.h 00:03:23.479 TEST_HEADER include/spdk/pipe.h 00:03:23.479 TEST_HEADER 
include/spdk/queue.h 00:03:23.479 TEST_HEADER include/spdk/reduce.h 00:03:23.479 TEST_HEADER include/spdk/rpc.h 00:03:23.479 TEST_HEADER include/spdk/scheduler.h 00:03:23.479 TEST_HEADER include/spdk/scsi.h 00:03:23.479 TEST_HEADER include/spdk/scsi_spec.h 00:03:23.479 TEST_HEADER include/spdk/sock.h 00:03:23.479 CC test/env/mem_callbacks/mem_callbacks.o 00:03:23.479 TEST_HEADER include/spdk/stdinc.h 00:03:23.479 TEST_HEADER include/spdk/string.h 00:03:23.479 TEST_HEADER include/spdk/thread.h 00:03:23.479 TEST_HEADER include/spdk/trace.h 00:03:23.479 TEST_HEADER include/spdk/trace_parser.h 00:03:23.479 TEST_HEADER include/spdk/tree.h 00:03:23.479 TEST_HEADER include/spdk/ublk.h 00:03:23.479 TEST_HEADER include/spdk/util.h 00:03:23.479 TEST_HEADER include/spdk/uuid.h 00:03:23.479 TEST_HEADER include/spdk/version.h 00:03:23.479 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:23.479 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:23.479 TEST_HEADER include/spdk/vhost.h 00:03:23.479 TEST_HEADER include/spdk/vmd.h 00:03:23.479 TEST_HEADER include/spdk/xor.h 00:03:23.479 TEST_HEADER include/spdk/zipf.h 00:03:23.479 CXX test/cpp_headers/accel.o 00:03:23.479 LINK nvmf_tgt 00:03:23.479 LINK iscsi_tgt 00:03:23.479 LINK poller_perf 00:03:23.479 LINK zipf 00:03:23.479 LINK bdev_svc 00:03:23.479 LINK spdk_trace_record 00:03:23.737 CXX test/cpp_headers/accel_module.o 00:03:23.737 LINK spdk_trace 00:03:23.737 CC test/env/vtophys/vtophys.o 00:03:23.737 LINK test_dma 00:03:23.737 CXX test/cpp_headers/assert.o 00:03:23.737 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:23.737 CC examples/ioat/perf/perf.o 00:03:23.996 CC examples/vmd/lsvmd/lsvmd.o 00:03:23.996 LINK vtophys 00:03:23.996 CC examples/idxd/perf/perf.o 00:03:23.996 CXX test/cpp_headers/barrier.o 00:03:23.996 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:23.996 LINK env_dpdk_post_init 00:03:23.996 LINK mem_callbacks 00:03:23.996 CC app/spdk_tgt/spdk_tgt.o 00:03:23.996 LINK lsvmd 00:03:23.996 LINK ioat_perf 
00:03:23.996 CXX test/cpp_headers/base64.o 00:03:24.254 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:24.254 CC app/spdk_lspci/spdk_lspci.o 00:03:24.254 CC app/spdk_nvme_perf/perf.o 00:03:24.254 LINK spdk_tgt 00:03:24.254 CC test/env/memory/memory_ut.o 00:03:24.254 CXX test/cpp_headers/bdev.o 00:03:24.254 CC examples/vmd/led/led.o 00:03:24.254 LINK idxd_perf 00:03:24.254 CC examples/ioat/verify/verify.o 00:03:24.254 LINK spdk_lspci 00:03:24.513 CXX test/cpp_headers/bdev_module.o 00:03:24.513 LINK led 00:03:24.513 LINK nvme_fuzz 00:03:24.513 CXX test/cpp_headers/bdev_zone.o 00:03:24.513 CXX test/cpp_headers/bit_array.o 00:03:24.513 LINK verify 00:03:24.513 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:24.513 CXX test/cpp_headers/bit_pool.o 00:03:24.771 CXX test/cpp_headers/blob_bdev.o 00:03:24.771 CC app/spdk_nvme_discover/discovery_aer.o 00:03:24.771 CC app/spdk_nvme_identify/identify.o 00:03:24.771 CC app/spdk_top/spdk_top.o 00:03:24.771 LINK interrupt_tgt 00:03:24.771 CC test/app/histogram_perf/histogram_perf.o 00:03:24.771 CXX test/cpp_headers/blobfs_bdev.o 00:03:25.029 LINK spdk_nvme_discover 00:03:25.029 LINK histogram_perf 00:03:25.029 CC examples/thread/thread/thread_ex.o 00:03:25.029 CXX test/cpp_headers/blobfs.o 00:03:25.029 CC test/env/pci/pci_ut.o 00:03:25.029 CXX test/cpp_headers/blob.o 00:03:25.287 LINK spdk_nvme_perf 00:03:25.287 LINK thread 00:03:25.287 CC test/app/jsoncat/jsoncat.o 00:03:25.287 CC examples/sock/hello_world/hello_sock.o 00:03:25.287 CXX test/cpp_headers/conf.o 00:03:25.287 LINK memory_ut 00:03:25.287 CXX test/cpp_headers/config.o 00:03:25.287 LINK jsoncat 00:03:25.287 CXX test/cpp_headers/cpuset.o 00:03:25.544 CXX test/cpp_headers/crc16.o 00:03:25.544 LINK pci_ut 00:03:25.544 LINK hello_sock 00:03:25.544 CC app/vhost/vhost.o 00:03:25.544 CXX test/cpp_headers/crc32.o 00:03:25.544 CC test/app/stub/stub.o 00:03:25.544 CC app/spdk_dd/spdk_dd.o 00:03:25.802 LINK spdk_nvme_identify 00:03:25.802 LINK vhost 00:03:25.802 LINK spdk_top 
00:03:25.802 CC app/fio/nvme/fio_plugin.o 00:03:25.802 CXX test/cpp_headers/crc64.o 00:03:25.802 CXX test/cpp_headers/dif.o 00:03:25.802 LINK stub 00:03:25.802 CC examples/accel/perf/accel_perf.o 00:03:25.802 CXX test/cpp_headers/dma.o 00:03:25.802 CXX test/cpp_headers/endian.o 00:03:26.061 LINK iscsi_fuzz 00:03:26.061 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:26.061 CC app/fio/bdev/fio_plugin.o 00:03:26.061 CC test/rpc_client/rpc_client_test.o 00:03:26.061 CXX test/cpp_headers/env_dpdk.o 00:03:26.061 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:26.061 LINK spdk_dd 00:03:26.061 CC examples/blob/hello_world/hello_blob.o 00:03:26.319 CXX test/cpp_headers/env.o 00:03:26.319 CC test/accel/dif/dif.o 00:03:26.319 LINK rpc_client_test 00:03:26.319 LINK accel_perf 00:03:26.319 LINK spdk_nvme 00:03:26.319 CXX test/cpp_headers/event.o 00:03:26.319 LINK hello_blob 00:03:26.319 CC test/blobfs/mkfs/mkfs.o 00:03:26.576 CC test/event/event_perf/event_perf.o 00:03:26.576 LINK spdk_bdev 00:03:26.576 CXX test/cpp_headers/fd_group.o 00:03:26.576 LINK vhost_fuzz 00:03:26.576 LINK mkfs 00:03:26.576 CC examples/nvme/hello_world/hello_world.o 00:03:26.576 LINK event_perf 00:03:26.576 CC test/lvol/esnap/esnap.o 00:03:26.576 CXX test/cpp_headers/fd.o 00:03:26.576 CC test/nvme/aer/aer.o 00:03:26.834 CC examples/blob/cli/blobcli.o 00:03:26.834 LINK dif 00:03:26.834 CC test/event/reactor/reactor.o 00:03:26.834 CXX test/cpp_headers/file.o 00:03:26.834 LINK reactor 00:03:26.834 LINK hello_world 00:03:26.834 CC examples/bdev/hello_world/hello_bdev.o 00:03:26.834 CC test/event/reactor_perf/reactor_perf.o 00:03:26.834 CC examples/bdev/bdevperf/bdevperf.o 00:03:27.092 LINK aer 00:03:27.092 CXX test/cpp_headers/ftl.o 00:03:27.092 CC test/nvme/reset/reset.o 00:03:27.092 LINK reactor_perf 00:03:27.092 CC test/event/app_repeat/app_repeat.o 00:03:27.092 LINK hello_bdev 00:03:27.092 CC examples/nvme/reconnect/reconnect.o 00:03:27.092 CXX test/cpp_headers/gpt_spec.o 00:03:27.092 LINK blobcli 
00:03:27.349 LINK app_repeat 00:03:27.349 CC test/event/scheduler/scheduler.o 00:03:27.349 LINK reset 00:03:27.349 CXX test/cpp_headers/hexlify.o 00:03:27.349 CC test/bdev/bdevio/bdevio.o 00:03:27.349 CXX test/cpp_headers/histogram_data.o 00:03:27.349 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:27.349 CXX test/cpp_headers/idxd.o 00:03:27.608 LINK scheduler 00:03:27.608 LINK reconnect 00:03:27.608 CC test/nvme/sgl/sgl.o 00:03:27.608 CC examples/nvme/arbitration/arbitration.o 00:03:27.608 CXX test/cpp_headers/idxd_spec.o 00:03:27.608 CC examples/nvme/hotplug/hotplug.o 00:03:27.608 LINK bdevperf 00:03:27.866 CXX test/cpp_headers/init.o 00:03:27.866 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:27.866 LINK bdevio 00:03:27.866 CC examples/nvme/abort/abort.o 00:03:27.866 LINK sgl 00:03:27.866 LINK hotplug 00:03:27.866 CXX test/cpp_headers/ioat.o 00:03:27.866 LINK arbitration 00:03:27.866 LINK cmb_copy 00:03:27.866 LINK nvme_manage 00:03:28.125 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:28.125 CXX test/cpp_headers/ioat_spec.o 00:03:28.125 CC test/nvme/e2edp/nvme_dp.o 00:03:28.125 CC test/nvme/overhead/overhead.o 00:03:28.125 CXX test/cpp_headers/iscsi_spec.o 00:03:28.125 CC test/nvme/err_injection/err_injection.o 00:03:28.125 CC test/nvme/startup/startup.o 00:03:28.125 LINK abort 00:03:28.125 CC test/nvme/reserve/reserve.o 00:03:28.125 LINK pmr_persistence 00:03:28.125 CXX test/cpp_headers/json.o 00:03:28.384 CC test/nvme/simple_copy/simple_copy.o 00:03:28.384 LINK err_injection 00:03:28.384 LINK startup 00:03:28.384 LINK nvme_dp 00:03:28.384 CXX test/cpp_headers/jsonrpc.o 00:03:28.384 LINK overhead 00:03:28.384 CXX test/cpp_headers/keyring.o 00:03:28.384 LINK reserve 00:03:28.643 LINK simple_copy 00:03:28.643 CXX test/cpp_headers/keyring_module.o 00:03:28.643 CC test/nvme/connect_stress/connect_stress.o 00:03:28.643 CC examples/nvmf/nvmf/nvmf.o 00:03:28.643 CC test/nvme/compliance/nvme_compliance.o 00:03:28.643 CC test/nvme/boot_partition/boot_partition.o 
00:03:28.643 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:28.643 CC test/nvme/fused_ordering/fused_ordering.o 00:03:28.643 CXX test/cpp_headers/likely.o 00:03:28.643 CC test/nvme/fdp/fdp.o 00:03:28.643 LINK connect_stress 00:03:28.643 LINK boot_partition 00:03:28.643 CC test/nvme/cuse/cuse.o 00:03:28.903 CXX test/cpp_headers/log.o 00:03:28.903 LINK doorbell_aers 00:03:28.903 LINK fused_ordering 00:03:28.903 LINK nvmf 00:03:28.903 CXX test/cpp_headers/lvol.o 00:03:28.903 CXX test/cpp_headers/memory.o 00:03:28.903 CXX test/cpp_headers/mmio.o 00:03:28.903 LINK nvme_compliance 00:03:28.903 CXX test/cpp_headers/nbd.o 00:03:28.903 CXX test/cpp_headers/net.o 00:03:28.903 LINK fdp 00:03:29.163 CXX test/cpp_headers/notify.o 00:03:29.163 CXX test/cpp_headers/nvme.o 00:03:29.163 CXX test/cpp_headers/nvme_intel.o 00:03:29.163 CXX test/cpp_headers/nvme_ocssd.o 00:03:29.163 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:29.163 CXX test/cpp_headers/nvme_spec.o 00:03:29.163 CXX test/cpp_headers/nvme_zns.o 00:03:29.163 CXX test/cpp_headers/nvmf_cmd.o 00:03:29.163 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:29.163 CXX test/cpp_headers/nvmf.o 00:03:29.163 CXX test/cpp_headers/nvmf_spec.o 00:03:29.421 CXX test/cpp_headers/nvmf_transport.o 00:03:29.421 CXX test/cpp_headers/opal.o 00:03:29.421 CXX test/cpp_headers/opal_spec.o 00:03:29.421 CXX test/cpp_headers/pci_ids.o 00:03:29.421 CXX test/cpp_headers/pipe.o 00:03:29.421 CXX test/cpp_headers/queue.o 00:03:29.421 CXX test/cpp_headers/reduce.o 00:03:29.421 CXX test/cpp_headers/rpc.o 00:03:29.421 CXX test/cpp_headers/scheduler.o 00:03:29.421 CXX test/cpp_headers/scsi.o 00:03:29.421 CXX test/cpp_headers/scsi_spec.o 00:03:29.421 CXX test/cpp_headers/sock.o 00:03:29.421 CXX test/cpp_headers/stdinc.o 00:03:29.421 CXX test/cpp_headers/string.o 00:03:29.679 CXX test/cpp_headers/thread.o 00:03:29.679 CXX test/cpp_headers/trace.o 00:03:29.679 CXX test/cpp_headers/trace_parser.o 00:03:29.679 CXX test/cpp_headers/tree.o 00:03:29.679 CXX 
test/cpp_headers/ublk.o 00:03:29.679 CXX test/cpp_headers/util.o 00:03:29.679 CXX test/cpp_headers/version.o 00:03:29.679 CXX test/cpp_headers/uuid.o 00:03:29.679 CXX test/cpp_headers/vfio_user_pci.o 00:03:29.679 CXX test/cpp_headers/vfio_user_spec.o 00:03:29.679 CXX test/cpp_headers/vhost.o 00:03:29.679 CXX test/cpp_headers/vmd.o 00:03:29.679 CXX test/cpp_headers/xor.o 00:03:29.679 CXX test/cpp_headers/zipf.o 00:03:29.937 LINK cuse 00:03:32.470 LINK esnap 00:03:32.729 00:03:32.729 real 1m3.202s 00:03:32.729 user 5m38.523s 00:03:32.729 sys 1m40.405s 00:03:32.729 04:52:47 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:32.729 04:52:47 make -- common/autotest_common.sh@10 -- $ set +x 00:03:32.729 ************************************ 00:03:32.729 END TEST make 00:03:32.729 ************************************ 00:03:32.729 04:52:47 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:32.729 04:52:47 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:32.729 04:52:47 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:32.729 04:52:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:32.729 04:52:47 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:32.729 04:52:47 -- pm/common@44 -- $ pid=5192 00:03:32.730 04:52:47 -- pm/common@50 -- $ kill -TERM 5192 00:03:32.730 04:52:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:32.730 04:52:47 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:32.730 04:52:47 -- pm/common@44 -- $ pid=5194 00:03:32.730 04:52:47 -- pm/common@50 -- $ kill -TERM 5194 00:03:32.730 04:52:47 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:32.730 04:52:47 -- nvmf/common.sh@7 -- # uname -s 00:03:32.730 04:52:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:32.730 04:52:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:32.730 04:52:47 
-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:32.730 04:52:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:32.730 04:52:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:32.730 04:52:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:32.730 04:52:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:32.730 04:52:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:32.730 04:52:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:32.730 04:52:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:32.730 04:52:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a36f1e81-73a2-4b75-9a56-c42aa4d68100 00:03:32.730 04:52:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=a36f1e81-73a2-4b75-9a56-c42aa4d68100 00:03:32.730 04:52:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:32.730 04:52:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:32.730 04:52:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:32.730 04:52:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:32.730 04:52:47 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:32.730 04:52:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:32.730 04:52:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:32.730 04:52:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:32.730 04:52:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.730 04:52:47 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.730 04:52:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.730 04:52:47 -- paths/export.sh@5 -- # export PATH 00:03:32.730 04:52:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.730 04:52:47 -- nvmf/common.sh@47 -- # : 0 00:03:32.730 04:52:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:32.730 04:52:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:32.730 04:52:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:32.730 04:52:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:32.730 04:52:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:32.730 04:52:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:32.730 04:52:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:32.730 04:52:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:32.730 04:52:47 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:32.730 04:52:47 -- spdk/autotest.sh@32 -- # uname -s 00:03:32.730 04:52:47 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:32.730 04:52:47 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:32.730 04:52:47 -- spdk/autotest.sh@34 -- # mkdir -p 
/home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:32.988 04:52:47 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:32.988 04:52:47 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:32.988 04:52:47 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:32.988 04:52:47 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:32.988 04:52:47 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:32.988 04:52:47 -- spdk/autotest.sh@48 -- # udevadm_pid=52887 00:03:32.988 04:52:47 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:32.988 04:52:47 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:32.988 04:52:47 -- pm/common@17 -- # local monitor 00:03:32.988 04:52:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:32.988 04:52:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:32.988 04:52:47 -- pm/common@21 -- # date +%s 00:03:32.988 04:52:47 -- pm/common@25 -- # sleep 1 00:03:32.988 04:52:47 -- pm/common@21 -- # date +%s 00:03:32.988 04:52:47 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721796767 00:03:32.988 04:52:47 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721796767 00:03:32.988 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721796767_collect-vmstat.pm.log 00:03:32.988 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721796767_collect-cpu-load.pm.log 00:03:33.924 04:52:48 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:33.924 04:52:48 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:33.924 04:52:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:33.925 04:52:48 -- 
common/autotest_common.sh@10 -- # set +x 00:03:33.925 04:52:48 -- spdk/autotest.sh@59 -- # create_test_list 00:03:33.925 04:52:48 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:33.925 04:52:48 -- common/autotest_common.sh@10 -- # set +x 00:03:33.925 04:52:48 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:33.925 04:52:48 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:33.925 04:52:48 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:33.925 04:52:48 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:33.925 04:52:48 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:33.925 04:52:48 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:33.925 04:52:48 -- common/autotest_common.sh@1453 -- # uname 00:03:33.925 04:52:48 -- common/autotest_common.sh@1453 -- # '[' Linux = FreeBSD ']' 00:03:33.925 04:52:48 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:33.925 04:52:48 -- common/autotest_common.sh@1473 -- # uname 00:03:33.925 04:52:48 -- common/autotest_common.sh@1473 -- # [[ Linux = FreeBSD ]] 00:03:33.925 04:52:48 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:33.925 04:52:48 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:33.925 04:52:48 -- spdk/autotest.sh@72 -- # hash lcov 00:03:33.925 04:52:48 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:33.925 04:52:48 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:33.925 --rc lcov_branch_coverage=1 00:03:33.925 --rc lcov_function_coverage=1 00:03:33.925 --rc genhtml_branch_coverage=1 00:03:33.925 --rc genhtml_function_coverage=1 00:03:33.925 --rc genhtml_legend=1 00:03:33.925 --rc geninfo_all_blocks=1 00:03:33.925 ' 00:03:33.925 04:52:48 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:33.925 --rc lcov_branch_coverage=1 00:03:33.925 --rc lcov_function_coverage=1 00:03:33.925 --rc genhtml_branch_coverage=1 00:03:33.925 --rc 
genhtml_function_coverage=1 00:03:33.925 --rc genhtml_legend=1 00:03:33.925 --rc geninfo_all_blocks=1 00:03:33.925 ' 00:03:33.925 04:52:48 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:33.925 --rc lcov_branch_coverage=1 00:03:33.925 --rc lcov_function_coverage=1 00:03:33.925 --rc genhtml_branch_coverage=1 00:03:33.925 --rc genhtml_function_coverage=1 00:03:33.925 --rc genhtml_legend=1 00:03:33.925 --rc geninfo_all_blocks=1 00:03:33.925 --no-external' 00:03:33.925 04:52:48 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:33.925 --rc lcov_branch_coverage=1 00:03:33.925 --rc lcov_function_coverage=1 00:03:33.925 --rc genhtml_branch_coverage=1 00:03:33.925 --rc genhtml_function_coverage=1 00:03:33.925 --rc genhtml_legend=1 00:03:33.925 --rc geninfo_all_blocks=1 00:03:33.925 --no-external' 00:03:33.925 04:52:48 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:34.185 lcov: LCOV version 1.14 00:03:34.185 04:52:48 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:49.061 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:49.061 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:59.035 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:59.035 geninfo: WARNING: 
GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:59.035 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:59.035 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:59.036 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:59.036 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:59.036 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 
00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:59.036 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:59.036 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:01.567 04:53:15 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:01.567 04:53:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:01.567 04:53:15 -- common/autotest_common.sh@10 -- # set +x 00:04:01.567 04:53:15 -- spdk/autotest.sh@91 -- # rm -f 00:04:01.567 04:53:15 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:02.136 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.136 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:02.136 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:02.136 04:53:16 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:02.136 04:53:16 -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:04:02.136 04:53:16 -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:04:02.136 04:53:16 -- common/autotest_common.sh@1668 -- # local nvme bdf 00:04:02.136 04:53:16 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:04:02.136 04:53:16 -- common/autotest_common.sh@1671 -- # is_block_zoned 
nvme0n1 00:04:02.136 04:53:16 -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:04:02.136 04:53:16 -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:02.136 04:53:16 -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:04:02.136 04:53:16 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:04:02.136 04:53:16 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:02.136 04:53:16 -- common/autotest_common.sh@1660 -- # local device=nvme1n1 00:04:02.136 04:53:16 -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:02.136 04:53:16 -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:04:02.136 04:53:16 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:04:02.136 04:53:16 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:04:02.136 04:53:16 -- common/autotest_common.sh@1660 -- # local device=nvme1n2 00:04:02.136 04:53:16 -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:02.136 04:53:16 -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:04:02.136 04:53:16 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:04:02.136 04:53:16 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:04:02.136 04:53:16 -- common/autotest_common.sh@1660 -- # local device=nvme1n3 00:04:02.136 04:53:16 -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:02.136 04:53:16 -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:04:02.136 04:53:16 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:02.136 04:53:16 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:02.136 04:53:16 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:02.136 04:53:16 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:02.136 04:53:16 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:02.136 04:53:16 -- scripts/common.sh@387 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:02.136 No valid GPT data, bailing 00:04:02.136 04:53:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:02.136 04:53:16 -- scripts/common.sh@391 -- # pt= 00:04:02.136 04:53:16 -- scripts/common.sh@392 -- # return 1 00:04:02.136 04:53:16 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:02.136 1+0 records in 00:04:02.136 1+0 records out 00:04:02.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00635915 s, 165 MB/s 00:04:02.136 04:53:16 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:02.136 04:53:16 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:02.136 04:53:16 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:02.136 04:53:16 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:02.136 04:53:16 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:02.395 No valid GPT data, bailing 00:04:02.395 04:53:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:02.395 04:53:16 -- scripts/common.sh@391 -- # pt= 00:04:02.395 04:53:16 -- scripts/common.sh@392 -- # return 1 00:04:02.395 04:53:16 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:02.395 1+0 records in 00:04:02.395 1+0 records out 00:04:02.395 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00552314 s, 190 MB/s 00:04:02.395 04:53:16 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:02.395 04:53:16 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:02.395 04:53:16 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:04:02.395 04:53:16 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:04:02.395 04:53:16 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:02.395 No valid GPT data, bailing 00:04:02.395 04:53:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:02.395 04:53:16 -- 
scripts/common.sh@391 -- # pt= 00:04:02.395 04:53:16 -- scripts/common.sh@392 -- # return 1 00:04:02.395 04:53:16 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:02.395 1+0 records in 00:04:02.395 1+0 records out 00:04:02.395 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00600879 s, 175 MB/s 00:04:02.395 04:53:16 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:02.395 04:53:16 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:02.395 04:53:16 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:04:02.395 04:53:16 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:04:02.396 04:53:16 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:02.396 No valid GPT data, bailing 00:04:02.396 04:53:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:02.396 04:53:16 -- scripts/common.sh@391 -- # pt= 00:04:02.396 04:53:16 -- scripts/common.sh@392 -- # return 1 00:04:02.396 04:53:16 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:02.396 1+0 records in 00:04:02.396 1+0 records out 00:04:02.396 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00544524 s, 193 MB/s 00:04:02.396 04:53:17 -- spdk/autotest.sh@118 -- # sync 00:04:02.655 04:53:17 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:02.655 04:53:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:02.655 04:53:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:05.189 04:53:19 -- spdk/autotest.sh@124 -- # uname -s 00:04:05.189 04:53:19 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:05.189 04:53:19 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:05.189 04:53:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.189 04:53:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.189 04:53:19 -- 
common/autotest_common.sh@10 -- # set +x 00:04:05.189 ************************************ 00:04:05.189 START TEST setup.sh 00:04:05.189 ************************************ 00:04:05.189 04:53:19 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:05.189 * Looking for test storage... 00:04:05.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:05.189 04:53:19 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:05.189 04:53:19 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:05.189 04:53:19 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:05.189 04:53:19 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.189 04:53:19 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.189 04:53:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:05.189 ************************************ 00:04:05.189 START TEST acl 00:04:05.189 ************************************ 00:04:05.189 04:53:19 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:05.189 * Looking for test storage... 
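The pre_cleanup trace above (spdk-gpt.py, `blkid -s PTTYPE`, then `dd if=/dev/zero ... bs=1M count=1` on each namespace) boils down to: if a device has no recognizable partition table, zero its first MiB. A minimal sketch of that logic — `wipe_if_no_gpt` is a hypothetical helper name, not a function from the SPDK scripts:

```shell
# Sketch of the per-device cleanup traced above (autotest.sh pre_cleanup).
# Illustration only; the real script also consults spdk-gpt.py first.
wipe_if_no_gpt() {
    local dev=$1 pt
    # blkid prints the partition-table type ("gpt", "dos", ...);
    # empty output means no recognized label on the device.
    pt=$(blkid -s PTTYPE -o value "$dev" 2>/dev/null)
    if [ -z "$pt" ]; then
        # No valid partition table: clear the first 1 MiB, matching the
        # "dd if=/dev/zero of=/dev/nvme... bs=1M count=1" lines in the log.
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
}
```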
00:04:05.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:05.189 04:53:19 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:05.189 04:53:19 setup.sh.acl -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:04:05.189 04:53:19 setup.sh.acl -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:04:05.189 04:53:19 setup.sh.acl -- common/autotest_common.sh@1668 -- # local nvme bdf 00:04:05.189 04:53:19 setup.sh.acl -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:04:05.189 04:53:19 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:05.189 04:53:19 setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:04:05.189 04:53:19 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:05.190 04:53:19 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:04:05.190 04:53:19 setup.sh.acl -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:04:05.190 04:53:19 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:05.190 04:53:19 setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme1n1 00:04:05.190 04:53:19 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:05.190 04:53:19 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:04:05.190 04:53:19 setup.sh.acl -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:04:05.190 04:53:19 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:04:05.190 04:53:19 setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme1n2 00:04:05.190 04:53:19 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:05.190 04:53:19 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:04:05.190 04:53:19 setup.sh.acl -- common/autotest_common.sh@1670 
-- # for nvme in /sys/block/nvme* 00:04:05.190 04:53:19 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:04:05.190 04:53:19 setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme1n3 00:04:05.190 04:53:19 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:05.190 04:53:19 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:04:05.190 04:53:19 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:05.190 04:53:19 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:05.190 04:53:19 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:05.190 04:53:19 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:05.190 04:53:19 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:05.190 04:53:19 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:05.190 04:53:19 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:06.130 04:53:20 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:06.130 04:53:20 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:06.130 04:53:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.130 04:53:20 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:06.130 04:53:20 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.130 04:53:20 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:06.699 04:53:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:06.699 04:53:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:06.699 04:53:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.699 Hugepages 00:04:06.699 node hugesize free / total 00:04:06.699 04:53:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:06.699 04:53:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:06.699 04:53:21 
setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.699 00:04:06.699 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:06.699 04:53:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:06.699 04:53:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:06.699 04:53:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.699 04:53:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:06.699 04:53:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:06.699 04:53:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:06.699 04:53:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.958 04:53:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:06.958 04:53:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:06.958 04:53:21 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:06.958 04:53:21 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:06.958 04:53:21 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:06.958 04:53:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.958 04:53:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:06.958 04:53:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:06.958 04:53:21 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:06.958 04:53:21 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:06.958 04:53:21 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:06.958 04:53:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.958 04:53:21 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:06.958 04:53:21 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:06.958 04:53:21 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.958 04:53:21 
setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.958 04:53:21 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:06.958 ************************************ 00:04:06.958 START TEST denied 00:04:06.958 ************************************ 00:04:06.958 04:53:21 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:06.958 04:53:21 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:06.958 04:53:21 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:06.958 04:53:21 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.958 04:53:21 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:06.958 04:53:21 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:07.894 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:07.894 04:53:22 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:07.894 04:53:22 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:07.894 04:53:22 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:07.894 04:53:22 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:07.894 04:53:22 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:07.894 04:53:22 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:07.894 04:53:22 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:07.895 04:53:22 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:07.895 04:53:22 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:07.895 04:53:22 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:08.832 00:04:08.832 real 0m1.644s 00:04:08.832 user 0m0.626s 00:04:08.832 sys 
0m0.983s 00:04:08.832 04:53:23 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.832 04:53:23 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:08.832 ************************************ 00:04:08.832 END TEST denied 00:04:08.832 ************************************ 00:04:08.832 04:53:23 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:08.832 04:53:23 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.832 04:53:23 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.832 04:53:23 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:08.832 ************************************ 00:04:08.832 START TEST allowed 00:04:08.832 ************************************ 00:04:08.832 04:53:23 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:08.832 04:53:23 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:08.832 04:53:23 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:08.832 04:53:23 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:08.832 04:53:23 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.832 04:53:23 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:09.770 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:09.770 04:53:24 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:09.770 04:53:24 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:09.770 04:53:24 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:09.770 04:53:24 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:09.770 04:53:24 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:09.770 04:53:24 setup.sh.acl.allowed -- setup/acl.sh@32 -- # 
driver=/sys/bus/pci/drivers/nvme 00:04:09.770 04:53:24 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:09.770 04:53:24 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:09.770 04:53:24 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:09.770 04:53:24 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:10.337 00:04:10.337 real 0m1.741s 00:04:10.337 user 0m0.707s 00:04:10.337 sys 0m1.049s 00:04:10.337 04:53:24 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.337 ************************************ 00:04:10.337 END TEST allowed 00:04:10.337 04:53:24 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:10.337 ************************************ 00:04:10.337 ************************************ 00:04:10.337 END TEST acl 00:04:10.337 ************************************ 00:04:10.337 00:04:10.337 real 0m5.449s 00:04:10.337 user 0m2.243s 00:04:10.337 sys 0m3.194s 00:04:10.337 04:53:24 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.337 04:53:24 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:10.598 04:53:24 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:10.598 04:53:24 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.598 04:53:24 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.598 04:53:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:10.598 ************************************ 00:04:10.598 START TEST hugepages 00:04:10.598 ************************************ 00:04:10.598 04:53:25 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:10.598 * Looking for test storage... 
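The `get_zoned_devs`/`is_block_zoned` loop traced repeatedly above reads `/sys/block/<dev>/queue/zoned` for each NVMe namespace and treats any value other than `none` (e.g. `host-managed`) as zoned — hence the `[[ none != none ]]` comparisons in the log. A minimal sketch; the extra sysfs-root parameter is an addition for testability, not part of the real helper:

```shell
# Sketch of the zoned-device probe seen in the trace: a block device is
# zoned when its queue/zoned attribute exists and is not "none".
is_block_zoned() {
    local device=$1 sysfs=${2:-/sys}   # sysfs root overridable for testing
    [ -e "$sysfs/block/$device/queue/zoned" ] || return 1
    [ "$(cat "$sysfs/block/$device/queue/zoned")" != none ]
}

# Print the names of all zoned NVMe namespaces, as get_zoned_devs collects.
get_zoned_devs() {
    local nvme
    for nvme in /sys/block/nvme*; do
        is_block_zoned "${nvme##*/}" && printf '%s\n' "${nvme##*/}"
    done
}
```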
00:04:10.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:10.598 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:10.598 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:10.598 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:10.598 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:10.598 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:10.598 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:10.598 04:53:25 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:10.598 04:53:25 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:10.598 04:53:25 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:10.598 04:53:25 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:10.598 04:53:25 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.598 04:53:25 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.598 04:53:25 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.598 04:53:25 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.598 04:53:25 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.598 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.598 04:53:25 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 5842276 kB' 'MemAvailable: 7407296 kB' 'Buffers: 2436 kB' 'Cached: 1779020 kB' 'SwapCached: 0 kB' 'Active: 435556 kB' 'Inactive: 1450900 kB' 'Active(anon): 115488 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450900 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 106932 kB' 'Mapped: 48588 kB' 'Shmem: 10488 kB' 'KReclaimable: 61984 kB' 'Slab: 136672 kB' 'SReclaimable: 61984 kB' 'SUnreclaim: 74688 kB' 'KernelStack: 6380 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 337840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:10.598 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.598 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.598 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.598 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.598 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.598 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.598 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
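The trace above shows `setup/common.sh`'s `get_meminfo` scanning every `/proc/meminfo` field in turn until it reaches the one requested (here `Hugepagesize`, which echoes 2048 on this host). A minimal standalone sketch of that lookup follows; the function name `get_meminfo` and the `IFS=': '` splitting mirror the script being traced, while the optional file argument is an assumption added here so the sketch can be exercised against a sample file instead of the live `/proc/meminfo`:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo lookup traced above: read "Field: value unit"
# records and print the numeric value for the one requested field.
# The optional second argument (a meminfo-format file) is an assumption
# added for testability; the real script always reads /proc/meminfo.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"    # value only; the "kB" unit lands in _ via IFS splitting
            return 0
        fi
    done < "$mem_f"
    return 1               # field not present
}
```

For example, against a file containing `Hugepagesize: 2048 kB`, `get_meminfo Hugepagesize "$file"` prints `2048`, matching the `echo 2048` the xtrace reaches once the matching field is found.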
00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 04:53:25 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 
-- # continue 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 04:53:25 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@18 -- # 
global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:10.600 04:53:25 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:10.600 04:53:25 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.600 04:53:25 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.600 04:53:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:10.600 ************************************ 00:04:10.600 START TEST default_setup 00:04:10.600 ************************************ 00:04:10.600 04:53:25 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:10.600 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:10.600 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:10.600 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:10.600 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:10.600 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:10.600 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:10.600 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:10.600 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:10.600 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:10.600 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:10.600 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:10.600 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:10.600 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:10.600 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:10.600 04:53:25 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@67 -- # local -g nodes_test 00:04:10.600 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:10.600 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:10.600 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:10.600 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:10.600 04:53:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:10.600 04:53:25 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.600 04:53:25 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:11.539 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.539 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:11.539 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:11.539 
04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7906308 kB' 'MemAvailable: 9471120 kB' 'Buffers: 2436 kB' 'Cached: 1779008 kB' 'SwapCached: 0 kB' 'Active: 452320 kB' 'Inactive: 1450904 kB' 'Active(anon): 132252 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450904 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123432 kB' 'Mapped: 48724 kB' 'Shmem: 10464 kB' 'KReclaimable: 61560 kB' 'Slab: 136200 kB' 'SReclaimable: 61560 kB' 'SUnreclaim: 74640 kB' 'KernelStack: 6352 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354560 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.539 04:53:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.539 04:53:26 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.539 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.540 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.540 04:53:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.802 04:53:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7906560 kB' 'MemAvailable: 9471372 kB' 'Buffers: 2436 kB' 'Cached: 1779008 kB' 'SwapCached: 0 kB' 'Active: 451944 kB' 'Inactive: 1450904 kB' 'Active(anon): 131876 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450904 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123064 kB' 'Mapped: 48724 kB' 'Shmem: 10464 kB' 'KReclaimable: 61560 kB' 'Slab: 136192 kB' 'SReclaimable: 61560 kB' 'SUnreclaim: 74632 kB' 'KernelStack: 6304 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 
00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.802 04:53:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.802 04:53:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.802 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.803 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # 
local var val 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7906560 kB' 'MemAvailable: 9471372 kB' 'Buffers: 2436 kB' 'Cached: 1779008 kB' 'SwapCached: 0 kB' 'Active: 451976 kB' 'Inactive: 1450904 kB' 'Active(anon): 131908 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450904 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123084 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 61560 kB' 'Slab: 136192 kB' 'SReclaimable: 61560 kB' 'SUnreclaim: 74632 kB' 'KernelStack: 6304 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 
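The trace above shows `setup/common.sh`'s `get_meminfo` walking every `/proc/meminfo` field with `IFS=': '` and `read -r var val _`, hitting `continue` for each non-matching key until the requested one (here `HugePages_Rsvd`) is found and echoed. A minimal re-sketch of that pattern, simplified from the trace (the real script uses `mapfile` into an array and per-node meminfo paths; the function name and loop below are an illustrative reduction, not the script's exact code):

```shell
#!/usr/bin/env bash
# Simplified sketch of the get_meminfo pattern visible in the trace:
# split "Key: value kB" lines on ':' and space, skip non-matching keys,
# and echo the value of the first matching key (0 if absent).
get_meminfo_field() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    # continue past every key that is not the one requested,
    # mirroring the repeated "continue" lines in the log
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done
  echo 0
}

# Values taken from the meminfo dump printed above
sample=$'HugePages_Total: 1024\nHugePages_Free: 1024\nHugePages_Rsvd: 0\nHugePages_Surp: 0'
total=$(get_meminfo_field HugePages_Total <<<"$sample")
surp=$(get_meminfo_field HugePages_Surp <<<"$sample")
echo "total=$total surp=$surp"
```

This is why the log repeats one `[[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]` test per meminfo line: each iteration of the loop is traced individually, so a single `get_meminfo` call produces dozens of near-identical trace records.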
00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.804 04:53:26 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.804 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 
04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.805 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:11.806 nr_hugepages=1024 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:11.806 resv_hugepages=0 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:11.806 surplus_hugepages=0 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:11.806 anon_hugepages=0 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == 
nr_hugepages + surp + resv )) 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7906560 kB' 'MemAvailable: 9471372 kB' 'Buffers: 2436 kB' 'Cached: 1779008 kB' 'SwapCached: 0 kB' 'Active: 451976 kB' 'Inactive: 1450904 kB' 'Active(anon): 131908 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450904 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123084 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 61560 kB' 'Slab: 
136192 kB' 'SReclaimable: 61560 kB' 'SUnreclaim: 74632 kB' 'KernelStack: 6304 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.806 04:53:26 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.806 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 
04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.807 
04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.807 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.808 04:53:26 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv 
)) 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7907156 kB' 'MemUsed: 4334824 kB' 'SwapCached: 0 kB' 'Active: 451980 kB' 'Inactive: 1450904 kB' 'Active(anon): 131912 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450904 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 1781444 kB' 'Mapped: 48604 kB' 'AnonPages: 123084 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61560 kB' 'Slab: 136192 kB' 'SReclaimable: 61560 kB' 'SUnreclaim: 74632 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.808 04:53:26 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.808 
04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.808 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.809 
04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.809 04:53:26 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:11.809 node0=1024 expecting 1024 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # 
echo 'node0=1024 expecting 1024' 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:11.809 00:04:11.809 real 0m1.126s 00:04:11.809 user 0m0.461s 00:04:11.809 sys 0m0.619s 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.809 04:53:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:11.809 ************************************ 00:04:11.809 END TEST default_setup 00:04:11.809 ************************************ 00:04:11.809 04:53:26 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:11.809 04:53:26 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.809 04:53:26 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.809 04:53:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:11.809 ************************************ 00:04:11.809 START TEST per_node_1G_alloc 00:04:11.809 ************************************ 00:04:11.809 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:11.809 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:11.809 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:11.809 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:11.809 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:11.809 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:11.809 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:11.809 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:11.809 04:53:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:11.809 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:11.809 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:11.809 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:11.809 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:11.809 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:11.809 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:11.809 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:11.809 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:11.809 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:11.809 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:11.809 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:11.809 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:11.810 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:11.810 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:11.810 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:11.810 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.810 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:12.383 0000:00:03.0 (1af4 1001): 
Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.383 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:12.383 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:12.383 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:12.383 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:12.383 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:12.383 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:12.383 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:12.383 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:12.383 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:12.383 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:12.383 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:12.383 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:12.383 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:12.383 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:12.383 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.383 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.383 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.383 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.383 04:53:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.383 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.383 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.383 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8960940 kB' 'MemAvailable: 10525760 kB' 'Buffers: 2436 kB' 'Cached: 1779008 kB' 'SwapCached: 0 kB' 'Active: 452544 kB' 'Inactive: 1450912 kB' 'Active(anon): 132476 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123596 kB' 'Mapped: 48756 kB' 'Shmem: 10464 kB' 'KReclaimable: 61560 kB' 'Slab: 136184 kB' 'SReclaimable: 61560 kB' 'SUnreclaim: 74624 kB' 'KernelStack: 6344 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.384 04:53:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.384 04:53:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.384 04:53:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.384 04:53:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.384 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.385 04:53:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.385 
04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8961200 kB' 'MemAvailable: 10526020 kB' 'Buffers: 2436 kB' 'Cached: 1779008 kB' 'SwapCached: 0 kB' 'Active: 452192 kB' 'Inactive: 1450912 kB' 'Active(anon): 132124 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 123244 kB' 'Mapped: 48756 kB' 'Shmem: 10464 kB' 'KReclaimable: 61560 kB' 'Slab: 136180 kB' 'SReclaimable: 61560 kB' 'SUnreclaim: 74620 kB' 'KernelStack: 6296 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.385 04:53:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.385 04:53:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.385 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.386 04:53:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.386 04:53:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.386 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.387 04:53:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.387 04:53:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8960952 kB' 'MemAvailable: 10525772 kB' 'Buffers: 2436 kB' 'Cached: 1779008 kB' 'SwapCached: 0 kB' 'Active: 452260 kB' 'Inactive: 1450912 kB' 'Active(anon): 132192 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450912 kB' 'Unevictable: 1536 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 123300 kB' 'Mapped: 48628 kB' 'Shmem: 10464 kB' 'KReclaimable: 61560 kB' 'Slab: 136196 kB' 'SReclaimable: 61560 kB' 'SUnreclaim: 74636 kB' 'KernelStack: 6304 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.387 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.388 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.388 04:53:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.388 04:53:26 [xtrace condensed: setup/common.sh@31-32 iterate over the remaining /proc/meminfo keys (SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted); each fails the HugePages_Rsvd match and hits continue] 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc --
setup/common.sh@32 -- # continue 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:12.389 nr_hugepages=512 00:04:12.389 resv_hugepages=0 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:12.389 surplus_hugepages=0 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:12.389 anon_hugepages=0 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # 
get_meminfo HugePages_Total 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8960700 kB' 'MemAvailable: 10525520 kB' 'Buffers: 2436 kB' 'Cached: 1779008 kB' 'SwapCached: 0 kB' 'Active: 452240 kB' 'Inactive: 1450912 kB' 'Active(anon): 132172 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 123280 kB' 'Mapped: 48628 kB' 'Shmem: 10464 kB' 'KReclaimable: 61560 kB' 'Slab: 136192 kB' 'SReclaimable: 61560 kB' 'SUnreclaim: 74632 kB' 'KernelStack: 6304 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.389 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.389 04:53:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.389 04:53:26 [xtrace condensed: setup/common.sh@31-32 iterate over the /proc/meminfo keys from Cached through FilePmdMapped; each fails the HugePages_Total match and hits continue] 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r
var val _ 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:12.391 04:53:26 
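The long `IFS=': '` / `read -r var val _` / `continue` runs above are SPDK's `get_meminfo` helper in `setup/common.sh` scanning meminfo output one key at a time until the requested counter (here HugePages_Rsvd, then HugePages_Total) matches, then echoing its value. A minimal standalone sketch of that parsing loop, assuming a hypothetical helper name and an illustrative sample file (neither is taken from the SPDK source):

```shell
#!/bin/sh
# Sketch of a get_meminfo-style lookup: split each meminfo line on ': ',
# skip non-matching keys (the long runs of "continue" in the xtrace above),
# and print the value of the requested key.
get_meminfo_value() {
    key=$1
    file=$2
    while IFS=': ' read -r var val _; do
        # Every key that is not the one we want falls through to continue.
        [ "$var" = "$key" ] || continue
        echo "$val"
        return 0
    done < "$file"
    return 1
}

# Illustrative meminfo-format input (values made up for the example).
tmp=$(mktemp)
printf '%s\n' 'MemTotal: 12241980 kB' 'HugePages_Total: 512' 'HugePages_Rsvd: 0' > "$tmp"
get_meminfo_value HugePages_Total "$tmp"   # prints: 512
rm -f "$tmp"
```

The same loop is reused per NUMA node by pointing `file` at `/sys/devices/system/node/node0/meminfo` instead of `/proc/meminfo`, which is what the `local node=0` trace entries below do.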
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8961348 kB' 'MemUsed: 3280632 kB' 'SwapCached: 0 kB' 'Active: 452156 kB' 'Inactive: 1450912 kB' 'Active(anon): 132088 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 1781444 kB' 'Mapped: 48628 kB' 'AnonPages: 123224 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61560 kB' 'Slab: 136192 kB' 'SReclaimable: 61560 kB' 'SUnreclaim: 74632 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.391 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.392 04:53:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.392 04:53:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.392 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.393 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.393 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.393 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.393 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.393 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.393 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:12.393 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.393 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:12.393 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.393 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.393 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.393 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.393 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.393 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:12.393 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:12.393 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:12.393 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:12.393 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:12.393 node0=512 expecting 512 00:04:12.393 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:12.393 00:04:12.393 real 0m0.612s 00:04:12.393 user 0m0.279s 00:04:12.393 sys 0m0.373s 00:04:12.393 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.393 04:53:26 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:12.393 ************************************ 00:04:12.393 END TEST per_node_1G_alloc 00:04:12.393 ************************************ 
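The long field-by-field scan above is `setup/common.sh`'s `get_meminfo` loop: it reads each `Field: value` record from `/proc/meminfo` (or a per-node `/sys/devices/system/node/nodeN/meminfo`) with `IFS=': '` and echoes the value of the one requested field. A minimal standalone sketch of that pattern, run against illustrative sample data rather than the live system (the function name and sample values here are assumptions, not the real helper):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo scan seen in the xtrace: split each record on
# ': ' into field name and value, and print the value of the field asked for.
get_meminfo_sketch() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "$val"
      return 0
    fi
  done
  return 1
}

# Illustrative meminfo excerpt, matching the values printed in the trace.
sample='HugePages_Total: 512
HugePages_Free: 512
HugePages_Surp: 0'

get_meminfo_sketch HugePages_Total <<<"$sample"   # prints 512
get_meminfo_sketch HugePages_Surp  <<<"$sample"   # prints 0
```

The real helper additionally strips a leading `Node N ` prefix from the per-node meminfo files before matching, which is why the trace shows `mem=("${mem[@]#Node +([0-9]) }")` before the loop.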
00:04:12.652 04:53:27 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:12.652 04:53:27 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.652 04:53:27 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.652 04:53:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:12.652 ************************************ 00:04:12.652 START TEST even_2G_alloc 00:04:12.652 ************************************ 00:04:12.652 04:53:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:12.652 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:12.652 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:12.652 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:12.652 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:12.652 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:12.652 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:12.652 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:12.652 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:12.652 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:12.652 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:12.652 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:12.652 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:12.652 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 
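The `even_2G_alloc` prologue above converts the requested size into a hugepage count and spreads it across the detected NUMA nodes. A sketch of that arithmetic under the values visible in the trace (2 GiB request, 2048 kB default hugepage size, one node; variable names are borrowed from the xtrace but the script itself is illustrative):

```shell
#!/usr/bin/env bash
# Sketch of get_test_nr_hugepages: a 2 GiB request divided by the default
# 2 MiB hugepage size yields 1024 pages, split evenly over the test nodes.
size=2097152               # requested size in kB, as in the trace
default_hugepages=2048     # Hugepagesize: 2048 kB
nr_hugepages=$(( size / default_hugepages ))
echo "nr_hugepages=$nr_hugepages"     # nr_hugepages=1024

no_nodes=1                 # the trace's VM exposes a single NUMA node
per_node=$(( nr_hugepages / no_nodes ))
echo "node0=$per_node"                # node0=1024
```

With a single node the whole 1024-page budget lands on `node0`, which is what the later `verify_nr_hugepages` check in the log confirms.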
00:04:12.652 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:12.652 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:12.652 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:12.652 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:12.652 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:12.652 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:12.652 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:12.652 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:12.652 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:12.652 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.652 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:12.913 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.913 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:12.913 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:12.913 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:12.913 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:12.913 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:12.913 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:12.913 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:12.913 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 
-- # local resv 00:04:12.913 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:12.913 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:12.913 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:12.913 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:12.913 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:12.913 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.913 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.913 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.913 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.913 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.913 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.913 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.913 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.913 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.913 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7915068 kB' 'MemAvailable: 9479888 kB' 'Buffers: 2436 kB' 'Cached: 1779008 kB' 'SwapCached: 0 kB' 'Active: 452124 kB' 'Inactive: 1450912 kB' 'Active(anon): 132056 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 
0 kB' 'AnonPages: 123204 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 61560 kB' 'Slab: 136200 kB' 'SReclaimable: 61560 kB' 'SUnreclaim: 74640 kB' 'KernelStack: 6312 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:12.913 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.913 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.913 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.913 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.913 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.913 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.914 04:53:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.915 04:53:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7914816 kB' 'MemAvailable: 9479636 kB' 'Buffers: 2436 kB' 'Cached: 1779008 kB' 'SwapCached: 0 kB' 'Active: 452044 kB' 'Inactive: 1450912 kB' 'Active(anon): 131976 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 123100 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 61560 kB' 'Slab: 136200 kB' 'SReclaimable: 61560 kB' 'SUnreclaim: 74640 kB' 'KernelStack: 6288 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.915 04:53:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.915 04:53:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.915 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.179 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 
04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 04:53:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 04:53:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 
-- # get_meminfo HugePages_Rsvd 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.180 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7914904 kB' 'MemAvailable: 9479724 kB' 'Buffers: 2436 kB' 'Cached: 1779008 kB' 'SwapCached: 0 kB' 'Active: 452224 kB' 'Inactive: 1450912 kB' 'Active(anon): 132156 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 123280 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 61560 kB' 'Slab: 136200 kB' 'SReclaimable: 61560 kB' 'SUnreclaim: 74640 kB' 'KernelStack: 6256 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 
354560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.181 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.182 
04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.182 04:53:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:13.182 nr_hugepages=1024 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:13.182 resv_hugepages=0 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:13.182 surplus_hugepages=0 00:04:13.182 04:53:27 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:13.182 anon_hugepages=0 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7915280 kB' 'MemAvailable: 9480100 kB' 'Buffers: 2436 kB' 'Cached: 1779008 kB' 'SwapCached: 0 kB' 'Active: 452220 kB' 'Inactive: 1450912 kB' 'Active(anon): 132152 
kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 123252 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 61560 kB' 'Slab: 136196 kB' 'SReclaimable: 61560 kB' 'SUnreclaim: 74636 kB' 'KernelStack: 6324 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.182 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.183 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.184 04:53:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.184 04:53:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.184 04:53:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:13.184 04:53:27 
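The long run of `continue`s above is one pass of `get_meminfo`: `setup/common.sh` reads the meminfo file with `IFS=': '` and skips every key until it reaches the requested one (here `HugePages_Total`, which finally hits `echo 1024` and `return 0`, satisfying the `(( 1024 == nr_hugepages + surp + resv ))` check). A condensed sketch of that pattern, reconstructed from the trace rather than copied from SPDK, with the file made a parameter so it can run against a fixture:

```shell
# Reconstructed from the trace: scan "Key: value [kB]" lines and print the
# value of the first key that matches the request.
get_meminfo() {
  local get=$1 mem_f=${2:-/proc/meminfo} var val _
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "$val"   # numeric value only; a trailing "kB" lands in $_
      return 0
    fi
    continue        # mirrors the continue@32 lines in the trace
  done < "$mem_f"
  return 1
}
```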
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.184 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7915280 kB' 'MemUsed: 4326700 kB' 'SwapCached: 0 kB' 'Active: 452084 kB' 'Inactive: 1450912 kB' 'Active(anon): 132016 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 
'Inactive(file): 1450912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'FilePages: 1781444 kB' 'Mapped: 48608 kB' 'AnonPages: 123124 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61560 kB' 'Slab: 136196 kB' 'SReclaimable: 61560 kB' 'SUnreclaim: 74636 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 
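The `printf '%s\n' 'MemTotal: ...'` dump above is the node-0 snapshot: when `get_meminfo` is given a node, the trace shows `mem_f` switching to `/sys/devices/system/node/node0/meminfo`, a `mapfile -t mem`, and the extglob substitution `mem=("${mem[@]#Node +([0-9]) }")` that strips the `Node <n> ` prefix before the same key scan. A hedged sketch of that prefix handling (the file path is parameterized here for illustration, so the fixture below can stand in for sysfs):

```shell
shopt -s extglob   # required for the +([0-9]) pattern below

# Reconstructed per-node lookup: sysfs node meminfo lines read
# "Node 0 MemFree: ... kB", so the "Node <n> " prefix is stripped first.
get_node_meminfo() {
  local get=$1 node=$2 mem_f=${3:-/sys/devices/system/node/node$2/meminfo}
  local -a mem
  local var val _ line
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 0 " prefix
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done
  return 1
}
```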
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 
04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 
04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.185 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.186 04:53:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.186 node0=1024 expecting 1024 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:13.186 00:04:13.186 real 0m0.608s 00:04:13.186 user 0m0.300s 00:04:13.186 sys 0m0.353s 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.186 04:53:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:13.186 ************************************ 00:04:13.186 END TEST even_2G_alloc 00:04:13.186 ************************************ 00:04:13.186 04:53:27 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:13.186 04:53:27 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.186 04:53:27 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.186 04:53:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:13.186 ************************************ 00:04:13.186 START TEST odd_alloc 00:04:13.186 ************************************ 00:04:13.186 04:53:27 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:13.186 04:53:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:13.186 04:53:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:13.186 04:53:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:13.186 04:53:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size 
>= default_hugepages )) 00:04:13.186 04:53:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:13.186 04:53:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:13.186 04:53:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:13.186 04:53:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:13.186 04:53:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:13.186 04:53:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:13.186 04:53:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:13.186 04:53:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:13.186 04:53:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:13.186 04:53:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:13.186 04:53:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.186 04:53:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:13.186 04:53:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:13.186 04:53:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:13.186 04:53:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.186 04:53:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:13.186 04:53:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:13.186 04:53:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:13.186 04:53:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.186 04:53:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:13.759 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:13.759 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:13.759 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:13.759 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:13.759 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:13.759 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:13.759 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:13.759 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:13.759 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # 
mapfile -t mem 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7910636 kB' 'MemAvailable: 9475460 kB' 'Buffers: 2436 kB' 'Cached: 1779012 kB' 'SwapCached: 0 kB' 'Active: 452420 kB' 'Inactive: 1450916 kB' 'Active(anon): 132352 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 123200 kB' 'Mapped: 48748 kB' 'Shmem: 10464 kB' 'KReclaimable: 61560 kB' 'Slab: 136216 kB' 'SReclaimable: 61560 kB' 'SUnreclaim: 74656 kB' 'KernelStack: 6336 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.760 04:53:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.760 04:53:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.760 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 
00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7910636 kB' 'MemAvailable: 9475460 kB' 'Buffers: 2436 kB' 'Cached: 1779012 kB' 'SwapCached: 0 kB' 'Active: 452184 kB' 'Inactive: 1450916 kB' 'Active(anon): 132116 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 123264 kB' 'Mapped: 48748 kB' 'Shmem: 10464 kB' 'KReclaimable: 61560 kB' 'Slab: 136216 kB' 'SReclaimable: 61560 kB' 'SUnreclaim: 74656 kB' 'KernelStack: 6336 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.761 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.762 
04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.762 
04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.762 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.762 04:53:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # per-field scan of /proc/meminfo against \H\u\g\e\P\a\g\e\s\_\S\u\r\p (Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd), each non-matching field taking the continue branch [repetitive IFS=': '/read/continue xtrace elided] 00:04:13.762-00:04:13.763 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.763 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.763 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:13.763 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:13.763 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:13.763 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:13.763 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:13.763 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:13.763 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.763 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.763 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.763 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.763 04:53:28
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.763 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.763 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.763 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7910636 kB' 'MemAvailable: 9475460 kB' 'Buffers: 2436 kB' 'Cached: 1779012 kB' 'SwapCached: 0 kB' 'Active: 451864 kB' 'Inactive: 1450916 kB' 'Active(anon): 131796 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 123204 kB' 'Mapped: 48748 kB' 'Shmem: 10464 kB' 'KReclaimable: 61560 kB' 'Slab: 136216 kB' 'SReclaimable: 61560 kB' 'SUnreclaim: 74656 kB' 'KernelStack: 6320 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:13.763 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.763 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.763 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.763 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:13.763 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # per-field scan of /proc/meminfo against \H\u\g\e\P\a\g\e\s\_\R\s\v\d (MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free), each non-matching field taking the continue branch [repetitive IFS=': '/read/continue xtrace elided] 00:04:13.763-00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:13.765
04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:13.765 nr_hugepages=1025 00:04:13.765 resv_hugepages=0 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:13.765 surplus_hugepages=0 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:13.765 anon_hugepages=0 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7910636 kB' 
'MemAvailable: 9475460 kB' 'Buffers: 2436 kB' 'Cached: 1779012 kB' 'SwapCached: 0 kB' 'Active: 452096 kB' 'Inactive: 1450916 kB' 'Active(anon): 132028 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 123140 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 61560 kB' 'Slab: 136216 kB' 'SReclaimable: 61560 kB' 'SUnreclaim: 74656 kB' 'KernelStack: 6304 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.765 
04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.765 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 
04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 04:53:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 04:53:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 04:53:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 
04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.766 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.767 
04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7910636 kB' 'MemUsed: 4331344 kB' 'SwapCached: 0 kB' 'Active: 451884 kB' 'Inactive: 1450916 kB' 'Active(anon): 131816 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 1781448 kB' 'Mapped: 48612 kB' 'AnonPages: 123192 kB' 'Shmem: 10464 kB' 'KernelStack: 6288 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61560 kB' 'Slab: 136216 kB' 'SReclaimable: 61560 kB' 'SUnreclaim: 74656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.767 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.768 
04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.768 04:53:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.768 04:53:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.768 04:53:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.768 node0=1025 expecting 1025 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:13.768 00:04:13.768 real 0m0.612s 00:04:13.768 user 0m0.277s 00:04:13.768 sys 0m0.379s 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.768 04:53:28 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:13.768 ************************************ 00:04:13.768 END TEST odd_alloc 00:04:13.768 ************************************ 00:04:13.768 04:53:28 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:13.768 04:53:28 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.768 04:53:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.768 04:53:28 
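The custom_alloc test that starts below sets nr_hugepages=512 via get_test_nr_hugepages (setup/hugepages.sh@49–57 in the trace). A minimal sketch of that arithmetic, under the assumption visible in the log (a 1048576 kB request against the 2048 kB default hugepage size from /proc/meminfo):

```shell
# Hedged sketch of the size-to-page-count step; values are the ones
# appearing in the trace, not a general implementation.
size=1048576              # requested size in kB (1 GiB), setup/hugepages.sh@49
default_hugepages=2048    # Hugepagesize in kB (2 MiB) from /proc/meminfo
nr_hugepages=$(( size / default_hugepages ))
echo "$nr_hugepages"      # 512, matching setup/hugepages.sh@57 in the trace
```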
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:14.027 ************************************ 00:04:14.027 START TEST custom_alloc 00:04:14.027 ************************************ 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:14.027 04:53:28 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:14.027 04:53:28 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.027 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:14.288 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:14.288 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:14.288 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # 
local resv 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8978012 kB' 'MemAvailable: 10542836 kB' 'Buffers: 2436 kB' 'Cached: 1779012 kB' 'SwapCached: 0 kB' 'Active: 452176 kB' 'Inactive: 1450916 kB' 'Active(anon): 132108 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 
123528 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 61560 kB' 'Slab: 136260 kB' 'SReclaimable: 61560 kB' 'SUnreclaim: 74700 kB' 'KernelStack: 6292 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- 
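The long `IFS=': ' … read -r var val _ … continue` runs above are get_meminfo (setup/common.sh@17–33) scanning each /proc/meminfo field for the one requested. A simplified, assumed reconstruction of that pattern (the real helper also handles per-node meminfo files, which this sketch omits):

```shell
# Hedged sketch of the get_meminfo loop seen in the xtrace: split each
# line on ': ' and echo the value of the requested field.
get_meminfo() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Mirrors the [[ $var == $get ]] / continue pattern in the trace.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$file"
    return 1
}

# Deterministic demo against a sample snapshot rather than the live file:
printf '%s\n' 'HugePages_Total: 512' 'HugePages_Surp: 0' > /tmp/meminfo.sample
get_meminfo HugePages_Surp /tmp/meminfo.sample   # prints 0, as echoed at @33
```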
setup/common.sh@31 -- # read -r var val _ 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.288 04:53:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.288 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.289 04:53:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.289 
04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.289 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.290 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8977736 kB' 'MemAvailable: 10542560 kB' 'Buffers: 2436 kB' 'Cached: 1779012 kB' 'SwapCached: 0 kB' 'Active: 451912 kB' 'Inactive: 1450916 kB' 'Active(anon): 131844 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 123180 kB' 'Mapped: 48608 kB' 'Shmem: 10464 kB' 'KReclaimable: 61560 kB' 'Slab: 136280 kB' 'SReclaimable: 61560 kB' 'SUnreclaim: 74720 kB' 'KernelStack: 6304 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 
2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:14.290 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [repetitive trace condensed: the read loop skips every meminfo field from MemTotal through Hugepagesize that does not match HugePages_Surp] 00:04:14.291 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.291 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.291 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:14.291 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:14.291 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:14.291 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:14.291 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:14.291 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:14.291 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.291 04:53:28
setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.291 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.291 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.291 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.291 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.291 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.291 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.291 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8977736 kB' 'MemAvailable: 10542560 kB' 'Buffers: 2436 kB' 'Cached: 1779012 kB' 'SwapCached: 0 kB' 'Active: 451936 kB' 'Inactive: 1450916 kB' 'Active(anon): 131868 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 123240 kB' 'Mapped: 48608 kB' 'Shmem: 10464 kB' 'KReclaimable: 61560 kB' 'Slab: 136276 kB' 'SReclaimable: 61560 kB' 'SUnreclaim: 74716 kB' 'KernelStack: 6320 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 
'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:14.291 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [repetitive trace condensed: the read loop skips each meminfo field from MemTotal onward that does not match HugePages_Rsvd; trace continues] 00:04:14.554 04:53:28 setup.sh.hugepages.custom_alloc --
setup/common.sh@31 -- # read -r var val _ 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.555 04:53:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:14.555 nr_hugepages=512 00:04:14.555 resv_hugepages=0 00:04:14.555 surplus_hugepages=0 00:04:14.555 anon_hugepages=0 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@17 -- # local get=HugePages_Total 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8977736 kB' 'MemAvailable: 10542560 kB' 'Buffers: 2436 kB' 'Cached: 1779012 kB' 'SwapCached: 0 kB' 'Active: 451896 kB' 'Inactive: 1450916 kB' 'Active(anon): 131828 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 123164 kB' 'Mapped: 48608 kB' 'Shmem: 10464 kB' 'KReclaimable: 61560 kB' 'Slab: 136272 kB' 'SReclaimable: 61560 kB' 'SUnreclaim: 74712 kB' 'KernelStack: 6304 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 
kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:14.555 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.555 04:53:28 [repeated per-field iterations condensed: the loop tests each /proc/meminfo key, MemTotal through Unaccepted, against HugePages_Total and hits 'continue' for every non-matching field, ending with [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]] 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc
-- setup/common.sh@32 -- # continue 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc 
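The trace above is setup/common.sh's get_meminfo walking /proc/meminfo one line at a time: each line is split on `': '` by `read -r var val _`, non-matching keys hit `continue`, and the loop echoes the value once the requested key (here `HugePages_Total`) matches. A minimal standalone sketch of that pattern (a hypothetical helper, not SPDK's actual function, fed from a here-string so it is deterministic):

```shell
#!/usr/bin/env bash
# Sketch of the meminfo-parsing loop seen in the xtrace: split each
# "Key: value kB" line with IFS=': ' and return the value for one key.
get_meminfo_value() {
    local key=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# Trimmed sample in the same format the trace prints via printf '%s\n'.
sample='MemTotal: 12241980 kB
HugePages_Total: 512
HugePages_Free: 512
HugePages_Surp: 0'

get_meminfo_value HugePages_Total <<< "$sample"   # prints 512
```

The `_` in `read -r var val _` swallows the trailing `kB` unit, which is why the trace's later arithmetic (`(( 512 == nr_hugepages + surp + resv ))`) can use the value directly.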
-- setup/common.sh@20 -- # local mem_f mem 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8977736 kB' 'MemUsed: 3264244 kB' 'SwapCached: 0 kB' 'Active: 452124 kB' 'Inactive: 1450916 kB' 'Active(anon): 132056 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'FilePages: 1781448 kB' 'Mapped: 48608 kB' 'AnonPages: 123164 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61560 kB' 'Slab: 136268 kB' 'SReclaimable: 61560 kB' 'SUnreclaim: 74708 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.557 04:53:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.557 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.558 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.558 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.558 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.558 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.558 04:53:28 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:04:14.558 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.558 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.558 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.558 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.558 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.558 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.558 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.558 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.558 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.558 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.558 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.558 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.558 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.558 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.558 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.558 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.558 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.558 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.558 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.558 04:53:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.558 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.558 04:53:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.558 04:53:29 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:14.558 node0=512 expecting 512 00:04:14.558 ************************************ 00:04:14.558 END TEST custom_alloc 00:04:14.558 ************************************ 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:14.558 04:53:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:14.559 00:04:14.559 real 0m0.630s 00:04:14.559 user 0m0.294s 00:04:14.559 sys 0m0.350s 00:04:14.559 04:53:29 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.559 04:53:29 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:14.559 
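In the per-node pass that just finished (get_meminfo HugePages_Surp 0), the trace shows common.sh switching `mem_f` from /proc/meminfo to /sys/devices/system/node/node0/meminfo and stripping the `Node 0 ` prefix from every line (`mem=("${mem[@]#Node +([0-9]) }")`) before the same parse loop runs. A hedged sketch of those two steps, with hypothetical helper names:

```shell
#!/usr/bin/env bash
# Sketch of the per-node switch in the trace: prefer the node-local meminfo
# file when a node id is given and it exists, else fall back to the global one.
node_meminfo_file() {
    local node=$1 mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    echo "$mem_f"
}

# Node-local lines look like "Node 0 HugePages_Total: 512"; strip the prefix
# so the same "Key: value" parser works for both files.
strip_node_prefix() {
    sed -E 's/^Node [0-9]+ +//'
}

echo 'Node 0 HugePages_Total: 512' | strip_node_prefix   # prints "HugePages_Total: 512"
```

SPDK's script does the prefix strip in pure bash with extglob pattern removal on the mapfile array; `sed` is used here only to keep the sketch self-contained.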
04:53:29 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:14.559 04:53:29 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.559 04:53:29 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.559 04:53:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:14.559 ************************************ 00:04:14.559 START TEST no_shrink_alloc 00:04:14.559 ************************************ 00:04:14.559 04:53:29 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:14.559 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:14.559 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:14.559 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:14.559 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:14.559 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:14.559 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:14.559 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:14.559 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:14.559 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:14.559 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:14.559 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:14.559 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:14.559 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- 
# local _no_nodes=1 00:04:14.559 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:14.559 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:14.559 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:14.559 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:14.559 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:14.559 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:14.559 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:14.559 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.559 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:15.133 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:15.133 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:15.133 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:15.133 
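The no_shrink_alloc prologue above runs `get_test_nr_hugepages 2097152 0`, and the trace ends up with `nr_hugepages=1024` assigned to node 0. The numbers are consistent with a size-in-kB divided by the default hugepage size (Hugepagesize: 2048 kB on this box: 2097152 / 2048 = 1024). A sketch of that arithmetic under that assumption:

```shell
#!/usr/bin/env bash
# Hedged sketch of the hugepage-count arithmetic implied by the trace;
# the kB interpretation of "size" is an assumption that matches the numbers.
default_hugepages=2048   # kB, from "Hugepagesize: 2048 kB" in meminfo
size=2097152             # kB, the first argument seen in the trace
(( size >= default_hugepages )) || exit 1   # mirrors the @55 size check
nr_hugepages=$(( size / default_hugepages ))
echo "$nr_hugepages"     # prints 1024
```

This also matches the later meminfo snapshot, where `HugePages_Total: 1024` and `Hugetlb: 2097152 kB` appear together.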
04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7930736 kB' 'MemAvailable: 9495560 kB' 'Buffers: 2436 kB' 'Cached: 1779012 kB' 'SwapCached: 0 kB' 'Active: 447012 kB' 'Inactive: 1450916 kB' 'Active(anon): 126944 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 118324 kB' 'Mapped: 47932 kB' 'Shmem: 10464 kB' 'KReclaimable: 61560 kB' 'Slab: 
136132 kB' 'SReclaimable: 61560 kB' 'SUnreclaim: 74572 kB' 'KernelStack: 6196 kB' 'PageTables: 3604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.133 
04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.133 04:53:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.133 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.134 04:53:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.134 04:53:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.134 
04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:15.134 04:53:29 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:15.134 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7930736 kB' 'MemAvailable: 9495560 kB' 'Buffers: 2436 kB' 'Cached: 1779012 kB' 'SwapCached: 0 kB' 'Active: 447216 kB' 'Inactive: 1450916 kB' 'Active(anon): 127148 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 118256 kB' 'Mapped: 47868 kB' 'Shmem: 10464 kB' 'KReclaimable: 61556 kB' 'Slab: 136028 kB' 'SReclaimable: 61556 kB' 'SUnreclaim: 74472 kB' 'KernelStack: 6192 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 
kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.135 04:53:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.135 04:53:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.135 04:53:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.135 04:53:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.135 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.136 04:53:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.136 
04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.136 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7937940 kB' 'MemAvailable: 9502764 kB' 'Buffers: 2436 kB' 
'Cached: 1779012 kB' 'SwapCached: 0 kB' 'Active: 446940 kB' 'Inactive: 1450916 kB' 'Active(anon): 126872 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 118236 kB' 'Mapped: 47868 kB' 'Shmem: 10464 kB' 'KReclaimable: 61556 kB' 'Slab: 136028 kB' 'SReclaimable: 61556 kB' 'SUnreclaim: 74472 kB' 'KernelStack: 6192 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.137 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.138 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:15.139 nr_hugepages=1024 00:04:15.139 resv_hugepages=0 00:04:15.139 surplus_hugepages=0 00:04:15.139 anon_hugepages=0 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7937184 kB' 'MemAvailable: 9502008 kB' 'Buffers: 2436 kB' 'Cached: 1779012 kB' 'SwapCached: 0 kB' 'Active: 446936 kB' 'Inactive: 1450916 kB' 'Active(anon): 126868 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 118236 kB' 'Mapped: 47868 kB' 'Shmem: 10464 kB' 'KReclaimable: 61556 kB' 'Slab: 136028 kB' 'SReclaimable: 61556 kB' 'SUnreclaim: 74472 kB' 'KernelStack: 6192 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 
kB' 'DirectMap1G: 9437184 kB' 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.139 04:53:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 04:53:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.140 04:53:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 04:53:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:15.140 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 
00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7937184 kB' 'MemUsed: 4304796 kB' 'SwapCached: 0 kB' 'Active: 446960 kB' 'Inactive: 1450916 kB' 'Active(anon): 126892 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'FilePages: 1781448 kB' 'Mapped: 47868 kB' 'AnonPages: 118256 kB' 'Shmem: 10464 kB' 'KernelStack: 6192 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61556 kB' 'Slab: 136016 kB' 'SReclaimable: 61556 kB' 'SUnreclaim: 74460 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 
04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 
00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:15.142 node0=1024 expecting 1024 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.142 04:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:15.717 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:15.717 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:15.717 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:15.717 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:15.717 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:15.717 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:15.717 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:15.717 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:15.717 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:15.717 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:15.717 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 
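The wall of `IFS=': '` / `read -r var val _` / `continue` trace lines above is bash xtrace from the `get_meminfo` helper in setup/common.sh: it walks /proc/meminfo (or a per-node meminfo file) line by line, skipping every field until the requested key matches, then echoes that field's value. A minimal standalone sketch of the same pattern — names and the `MEM_F` override are illustrative, not the exact upstream implementation — run here against a sample snippet rather than the live /proc/meminfo:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the trace: split each
# meminfo line on ': ' and print the value for one requested key.
get_meminfo() {
	local get=$1 var val _
	while IFS=': ' read -r var val _; do
		# Skip every field until the requested key matches
		# (this is the long [[ X == ... ]] / continue loop in the trace).
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < "${MEM_F:-/proc/meminfo}"
	return 1
}

# Demo against a sample snippet instead of the live /proc/meminfo:
MEM_F=$(mktemp)
printf '%s\n' 'HugePages_Total: 1024' 'HugePages_Free: 1024' \
	'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' > "$MEM_F"
get_meminfo HugePages_Surp   # prints 0
rm -f "$MEM_F"
```

Because `IFS` contains both `:` and a space, the `": "` after each key collapses into a single delimiter, so `var` gets the key, `val` the number, and `_` swallows any trailing unit such as `kB`. The escaped patterns in the trace (e.g. `\H\u\g\e\P\a\g\e\s\_\S\u\r\p`) are just how xtrace renders a quoted right-hand side of `[[ == ]]`, i.e. a literal string match rather than a glob.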
00:04:15.717 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:15.717 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:15.717 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:15.717 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:15.717 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:15.717 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.717 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.717 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7933908 kB' 'MemAvailable: 9498732 kB' 'Buffers: 2436 kB' 'Cached: 1779012 kB' 'SwapCached: 0 kB' 'Active: 447584 kB' 'Inactive: 1450916 kB' 'Active(anon): 127516 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 118656 kB' 'Mapped: 48048 kB' 'Shmem: 10464 kB' 'KReclaimable: 
61556 kB' 'Slab: 135972 kB' 'SReclaimable: 61556 kB' 'SUnreclaim: 74416 kB' 'KernelStack: 6312 kB' 'PageTables: 4000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.718 
04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.718 04:53:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.718 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.719 04:53:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.719 
04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:15.719 04:53:30 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7933660 kB' 'MemAvailable: 9498484 kB' 'Buffers: 2436 kB' 'Cached: 1779012 kB' 'SwapCached: 0 kB' 'Active: 447120 kB' 'Inactive: 1450916 kB' 'Active(anon): 127052 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 118420 kB' 'Mapped: 47928 kB' 'Shmem: 10464 kB' 'KReclaimable: 61556 kB' 'Slab: 135980 kB' 'SReclaimable: 61556 kB' 'SUnreclaim: 74424 kB' 'KernelStack: 6224 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 
kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.719 04:53:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.719 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 
04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.720 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7933660 kB' 'MemAvailable: 9498484 kB' 'Buffers: 2436 kB' 
'Cached: 1779012 kB' 'SwapCached: 0 kB' 'Active: 447016 kB' 'Inactive: 1450916 kB' 'Active(anon): 126948 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 118352 kB' 'Mapped: 47868 kB' 'Shmem: 10464 kB' 'KReclaimable: 61556 kB' 'Slab: 135980 kB' 'SReclaimable: 61556 kB' 'SUnreclaim: 74424 kB' 'KernelStack: 6208 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.721 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.722 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.723 04:53:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:15.723 nr_hugepages=1024 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:15.723 resv_hugepages=0 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:15.723 surplus_hugepages=0 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:15.723 anon_hugepages=0 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7933660 kB' 'MemAvailable: 9498484 kB' 'Buffers: 2436 kB' 'Cached: 1779012 kB' 'SwapCached: 0 kB' 'Active: 446956 kB' 'Inactive: 1450916 kB' 'Active(anon): 126888 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 118260 kB' 'Mapped: 47868 kB' 'Shmem: 10464 kB' 'KReclaimable: 61556 kB' 'Slab: 135980 kB' 'SReclaimable: 61556 kB' 'SUnreclaim: 74424 kB' 'KernelStack: 6192 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.723 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.724 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.725 04:53:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.725 04:53:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 
00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.725 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7933660 kB' 'MemUsed: 4308320 kB' 'SwapCached: 0 kB' 'Active: 446964 kB' 'Inactive: 1450916 kB' 'Active(anon): 126896 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1450916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'FilePages: 1781448 kB' 'Mapped: 47868 kB' 'AnonPages: 118260 kB' 'Shmem: 10464 kB' 'KernelStack: 6192 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61556 kB' 'Slab: 135980 kB' 'SReclaimable: 61556 kB' 'SUnreclaim: 74424 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:16.047 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.047 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.047 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.047 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.047 
[... identical setup/common.sh@31-32 IFS=': ' / read -r var val _ / continue iterations elided for each remaining /proc/meminfo key (MemFree, MemUsed, SwapCached, Active ... AnonHugePages, ShmemHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free) ...] 
00:04:16.048 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.048 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.048 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:16.048 node0=1024 expecting 1024 ************************************ 00:04:16.048 END TEST no_shrink_alloc 00:04:16.048 ************************************ 00:04:16.048 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:16.048 04:53:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:16.048 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:16.048 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:16.048 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:16.048 04:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:16.048 00:04:16.048 real 0m1.282s 00:04:16.048 user 0m0.615s 00:04:16.048 sys 0m0.695s 00:04:16.048 04:53:30 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.048 04:53:30 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:16.048 04:53:30 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:16.048 04:53:30 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:16.048 04:53:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:16.048 04:53:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:16.048 04:53:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:16.048 04:53:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:16.048 04:53:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:16.048 04:53:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:16.048 04:53:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:16.048 00:04:16.048 real 0m5.416s 00:04:16.048 user 0m2.429s 00:04:16.048 sys 0m3.086s 00:04:16.048 ************************************ 00:04:16.048 END TEST hugepages 00:04:16.048 ************************************ 00:04:16.048 04:53:30 setup.sh.hugepages -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.048 04:53:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:16.048 04:53:30 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:16.048 04:53:30 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.049 04:53:30 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.049 04:53:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:16.049 ************************************ 00:04:16.049 START TEST driver 00:04:16.049 ************************************ 00:04:16.049 04:53:30 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:16.049 * Looking for test storage... 00:04:16.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:16.049 04:53:30 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:16.049 04:53:30 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:16.049 04:53:30 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:16.986 04:53:31 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:16.986 04:53:31 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.986 04:53:31 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.986 04:53:31 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:16.986 ************************************ 00:04:16.986 START TEST guess_driver 00:04:16.986 ************************************ 00:04:16.986 04:53:31 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:16.986 04:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:16.986 04:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:16.986 04:53:31 
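The long run of `IFS=': '` / `read -r var val _` / `continue` lines in the trace above is setup/common.sh scanning /proc/meminfo one key at a time until it reaches the requested field (here HugePages_Surp, value 0) and echoing its value. A minimal stand-alone sketch of that scan; the file parameter is our addition so it can also run against a snapshot, and the function name is ours, not SPDK's:

```shell
#!/usr/bin/env bash
# Hedged sketch of the common.sh@31-33 meminfo scan seen in the trace:
# split each line on ':' plus whitespace, skip keys that do not match,
# print the value of the first match.
get_meminfo() {
  local want=$1 file=${2:-/proc/meminfo} var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$want" ]] || continue   # the trace's "continue" lines
    echo "$val"                         # the trace's "echo 0"
    return 0                            # the trace's "return 0"
  done < "$file"
  return 1
}
```

Units (the trailing `kB` on most keys) land in `_` and are discarded, matching the three-field `read` the trace shows.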
setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:16.986 04:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:16.986 04:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:16.986 04:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:16.986 04:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:16.986 04:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:16.986 04:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:16.986 04:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:16.986 04:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:16.986 04:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:16.986 04:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:16.986 04:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:16.986 04:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:16.986 04:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:16.986 04:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:16.986 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:16.986 04:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:16.986 Looking for driver=uio_pci_generic 00:04:16.986 04:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:16.986 04:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ 
\v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:16.986 04:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:16.986 04:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.986 04:53:31 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:16.986 04:53:31 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.986 04:53:31 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:17.554 04:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:17.554 04:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:17.554 04:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.554 04:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.554 04:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:17.554 04:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.813 04:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.813 04:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:17.813 04:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.813 04:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:17.813 04:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:17.813 04:53:32 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:17.813 04:53:32 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:18.382 00:04:18.382 
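The driver pick traced above falls back from vfio (driver.sh@29 finds zero IOMMU groups and returns 1) to uio_pci_generic, which is accepted because `modprobe --show-depends` printed insmod lines naming `.ko` files. A hedged sketch of that availability test; the helper names are ours, not SPDK's, and the string match is factored out so it can be exercised without touching modprobe:

```shell
#!/usr/bin/env bash
# Hedged sketch of the driver.sh@11-12 probe in the trace: a module is
# considered loadable when `modprobe --show-depends` output mentions a
# .ko file (the *\.\k\o* glob match in the log).
depends_lists_ko() {
  # $1 = captured output of `modprobe --show-depends <module>`
  [[ $1 == *".ko"* ]]
}

is_driver_available() {
  local deps
  deps=$(modprobe --show-depends "$1" 2>/dev/null) || return 1
  depends_lists_ko "$deps"
}
```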
real 0m1.683s 00:04:18.382 user 0m0.579s 00:04:18.382 sys 0m1.136s 00:04:18.382 04:53:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.382 ************************************ 00:04:18.382 END TEST guess_driver 00:04:18.382 04:53:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:18.382 ************************************ 00:04:18.382 ************************************ 00:04:18.382 END TEST driver 00:04:18.382 ************************************ 00:04:18.382 00:04:18.382 real 0m2.529s 00:04:18.382 user 0m0.850s 00:04:18.382 sys 0m1.786s 00:04:18.382 04:53:33 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.382 04:53:33 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:18.642 04:53:33 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:18.642 04:53:33 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.642 04:53:33 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.642 04:53:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:18.642 ************************************ 00:04:18.642 START TEST devices 00:04:18.642 ************************************ 00:04:18.642 04:53:33 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:18.642 * Looking for test storage... 
00:04:18.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:18.642 04:53:33 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:18.642 04:53:33 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:18.642 04:53:33 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:18.642 04:53:33 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:19.581 04:53:34 setup.sh.devices -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:04:19.581 04:53:34 setup.sh.devices -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:04:19.581 04:53:34 setup.sh.devices -- common/autotest_common.sh@1668 -- # local nvme bdf 00:04:19.581 04:53:34 setup.sh.devices -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:04:19.581 04:53:34 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:19.581 04:53:34 setup.sh.devices -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:04:19.581 04:53:34 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:19.581 04:53:34 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:04:19.581 04:53:34 setup.sh.devices -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:04:19.581 04:53:34 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:04:19.581 04:53:34 setup.sh.devices -- common/autotest_common.sh@1660 -- # local device=nvme0n2 00:04:19.581 04:53:34 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:19.581 04:53:34 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:04:19.581 04:53:34 setup.sh.devices -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:04:19.581 04:53:34 
setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:04:19.581 04:53:34 setup.sh.devices -- common/autotest_common.sh@1660 -- # local device=nvme0n3 00:04:19.581 04:53:34 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:19.581 04:53:34 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:04:19.581 04:53:34 setup.sh.devices -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:04:19.581 04:53:34 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:19.581 04:53:34 setup.sh.devices -- common/autotest_common.sh@1660 -- # local device=nvme1n1 00:04:19.581 04:53:34 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:19.581 04:53:34 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:19.581 04:53:34 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
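The `get_zoned_devs` loop traced above walks `/sys/block/nvme*/queue/zoned` and treats any value other than `none` as a zoned device to set aside. A stand-alone sketch; the `SYSFS_ROOT` override and the function name are our additions so the scan can be pointed at a mock tree:

```shell
#!/usr/bin/env bash
# Hedged sketch of the zoned-device scan in the trace: for each block
# device, read queue/zoned and report devices whose mode is not "none".
SYSFS_ROOT="${SYSFS_ROOT:-/sys}"

list_zoned_devs() {
  local dev mode
  for dev in "$SYSFS_ROOT"/block/*; do
    [[ -e "$dev/queue/zoned" ]] || continue   # same existence check as the log
    mode=$(<"$dev/queue/zoned")
    [[ "$mode" != none ]] && echo "${dev##*/}:$mode"
  done
  return 0
}
```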
00:04:19.581 04:53:34 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:19.581 No valid GPT data, bailing 00:04:19.581 04:53:34 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:19.581 04:53:34 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:19.581 04:53:34 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:19.581 04:53:34 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:19.581 04:53:34 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:19.581 04:53:34 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:19.581 04:53:34 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:19.581 04:53:34 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:19.581 No valid GPT data, bailing 00:04:19.581 04:53:34 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
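Each namespace above passes the capacity gate because devices.sh@198 sets min_disk_size=3221225472 (3 GiB) and the probed sizes (4294967296 here, 5368709120 for nvme1n1 below) clear that floor. A sketch of the gate taking the 512-byte sector count that `/sys/block/<dev>/size` reports; the helper names are ours:

```shell
#!/usr/bin/env bash
# Hedged sketch of the devices.sh@198/@204 size check in the trace.
min_disk_size=3221225472   # 3 GiB floor, as in devices.sh@198

# $1 = device size in 512-byte sectors (the unit /sys/block/<dev>/size uses)
sectors_to_bytes() { echo $(( $1 * 512 )); }

disk_big_enough() {
  (( $(sectors_to_bytes "$1") >= min_disk_size ))
}
```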
00:04:19.581 04:53:34 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:19.581 04:53:34 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:19.581 04:53:34 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:19.581 04:53:34 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:19.581 04:53:34 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:19.581 04:53:34 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:19.581 04:53:34 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:19.581 04:53:34 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:19.841 No valid GPT data, bailing 00:04:19.841 04:53:34 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:19.841 04:53:34 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:19.841 04:53:34 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:19.841 04:53:34 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:19.841 04:53:34 
setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:19.841 04:53:34 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:19.841 04:53:34 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:19.841 04:53:34 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:19.841 04:53:34 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:19.841 04:53:34 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:19.841 04:53:34 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:19.841 04:53:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:19.841 04:53:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:19.841 04:53:34 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:19.841 04:53:34 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:19.841 04:53:34 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:19.841 04:53:34 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:19.841 04:53:34 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:19.841 No valid GPT data, bailing 00:04:19.841 04:53:34 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:19.841 04:53:34 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:19.841 04:53:34 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:19.841 04:53:34 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:19.841 04:53:34 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:19.841 04:53:34 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:19.841 04:53:34 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:19.841 04:53:34 setup.sh.devices 
-- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:19.841 04:53:34 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:19.841 04:53:34 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:19.841 04:53:34 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:19.841 04:53:34 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:19.841 04:53:34 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:19.841 04:53:34 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.841 04:53:34 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.841 04:53:34 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:19.841 ************************************ 00:04:19.841 START TEST nvme_mount 00:04:19.841 ************************************ 00:04:19.841 04:53:34 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:19.841 04:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:19.841 04:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:19.841 04:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.841 04:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:19.841 04:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:19.841 04:53:34 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:19.841 04:53:34 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:19.841 04:53:34 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:19.841 04:53:34 setup.sh.devices.nvme_mount -- 
setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:19.841 04:53:34 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:19.841 04:53:34 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:19.841 04:53:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:19.841 04:53:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:19.841 04:53:34 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:19.841 04:53:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:19.841 04:53:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:19.841 04:53:34 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:19.841 04:53:34 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:19.841 04:53:34 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:20.776 Creating new GPT entries in memory. 00:04:20.776 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:20.776 other utilities. 00:04:20.777 04:53:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:20.777 04:53:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:20.777 04:53:35 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:20.777 04:53:35 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:20.777 04:53:35 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:22.154 Creating new GPT entries in memory. 00:04:22.154 The operation has completed successfully. 
00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57098 00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ 
-n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:22.154 04:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.413 04:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:22.413 04:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.413 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:22.413 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.673 
04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:22.673 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:22.673 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.673 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:22.673 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:22.673 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:22.673 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.673 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.673 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:22.673 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:22.673 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:22.673 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:22.673 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:22.932 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:22.932 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:22.932 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:22.932 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:22.932 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:22.932 04:53:37 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:22.932 04:53:37 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.932 04:53:37 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:22.932 04:53:37 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:22.932 04:53:37 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.932 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:22.932 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:22.932 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:22.932 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.932 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:22.932 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:22.932 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:22.932 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:22.932 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:22.932 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.932 04:53:37 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:22.932 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:22.932 04:53:37 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.932 04:53:37 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:23.191 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.191 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:23.191 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:23.191 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.191 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.191 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.451 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.451 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.451 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.451 04:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.451 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:23.451 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:23.451 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 
00:04:23.451 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:23.451 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:23.451 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:23.451 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:23.451 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:23.451 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:23.451 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:23.451 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:23.451 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:23.451 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:23.451 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:23.451 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.451 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:23.451 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:23.451 04:53:38 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.451 04:53:38 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:24.020 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:24.020 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not 
binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:24.020 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:24.020 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.020 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:24.020 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.020 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:24.020 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.020 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:24.020 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.279 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.279 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:24.279 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:24.280 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:24.280 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:24.280 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:24.280 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:24.280 04:53:38 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:24.280 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:24.280 00:04:24.280 real 0m4.328s 00:04:24.280 user 0m0.815s 00:04:24.280 sys 0m1.234s 00:04:24.280 04:53:38 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.280 ************************************ 00:04:24.280 END TEST nvme_mount 00:04:24.280 ************************************ 00:04:24.280 04:53:38 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:24.280 04:53:38 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:24.280 04:53:38 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.280 04:53:38 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.280 04:53:38 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:24.280 ************************************ 00:04:24.280 START TEST dm_mount 00:04:24.280 ************************************ 00:04:24.280 04:53:38 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:24.280 04:53:38 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:24.280 04:53:38 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:24.280 04:53:38 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:24.280 04:53:38 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:24.280 04:53:38 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:24.280 04:53:38 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:24.280 04:53:38 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:24.280 04:53:38 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:24.280 04:53:38 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:24.280 04:53:38 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:24.280 04:53:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:24.280 04:53:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
00:04:24.280 04:53:38 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:24.280 04:53:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:24.280 04:53:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.280 04:53:38 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:24.280 04:53:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:24.280 04:53:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.280 04:53:38 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:24.280 04:53:38 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:24.280 04:53:38 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:25.217 Creating new GPT entries in memory. 00:04:25.217 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:25.217 other utilities. 00:04:25.217 04:53:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:25.217 04:53:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.217 04:53:39 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:25.217 04:53:39 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:25.217 04:53:39 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:26.595 Creating new GPT entries in memory. 00:04:26.595 The operation has completed successfully. 
00:04:26.595 04:53:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:26.595 04:53:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.595 04:53:40 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:26.595 04:53:40 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:26.595 04:53:40 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:27.530 The operation has completed successfully. 00:04:27.530 04:53:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:27.530 04:53:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.530 04:53:41 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57540 00:04:27.530 04:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:27.530 04:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:27.530 04:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:27.530 04:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:27.530 04:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:27.530 04:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:27.530 04:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:27.530 04:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:27.530 04:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:27.530 04:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:27.530 
04:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:27.530 04:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:27.530 04:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:27.531 04:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:27.531 04:53:41 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:27.531 04:53:41 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:27.531 04:53:41 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:27.531 04:53:41 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:27.531 04:53:41 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:27.531 04:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:27.531 04:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:27.531 04:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:27.531 04:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:27.531 04:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:27.531 04:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:27.531 04:53:41 
setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:27.531 04:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:27.531 04:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:27.531 04:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.531 04:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:27.531 04:53:41 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:27.531 04:53:41 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.531 04:53:41 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:27.789 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:27.789 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:27.789 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:27.789 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.789 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:27.789 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.789 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:27.789 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.048 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:28.048 04:53:42 
setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.048 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:28.048 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:28.048 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:28.048 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:28.048 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:28.048 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:28.048 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:28.048 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:28.048 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:28.048 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:28.048 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:28.048 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:28.048 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:28.048 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:28.048 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.048 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:28.048 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 
00:04:28.048 04:53:42 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.048 04:53:42 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:28.308 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:28.308 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:28.308 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:28.308 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.308 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:28.308 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.566 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:28.566 04:53:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.566 04:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:28.566 04:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.566 04:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:28.566 04:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:28.566 04:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:28.566 04:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:28.566 04:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:28.566 
04:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:28.566 04:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:28.566 04:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:28.566 04:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:28.566 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:28.566 04:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:28.566 04:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:28.825 00:04:28.825 real 0m4.460s 00:04:28.825 user 0m0.529s 00:04:28.825 sys 0m0.874s 00:04:28.825 04:53:43 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.825 04:53:43 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:28.825 ************************************ 00:04:28.825 END TEST dm_mount 00:04:28.825 ************************************ 00:04:28.825 04:53:43 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:28.825 04:53:43 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:28.825 04:53:43 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.825 04:53:43 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:28.825 04:53:43 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:28.825 04:53:43 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:28.825 04:53:43 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:29.084 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:29.084 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:29.084 /dev/nvme0n1: 2 
bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:29.084 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:29.084 04:53:43 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:29.084 04:53:43 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:29.084 04:53:43 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:29.084 04:53:43 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:29.084 04:53:43 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:29.084 04:53:43 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:29.084 04:53:43 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:29.084 00:04:29.084 real 0m10.483s 00:04:29.084 user 0m2.034s 00:04:29.084 sys 0m2.818s 00:04:29.084 04:53:43 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.084 04:53:43 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:29.084 ************************************ 00:04:29.084 END TEST devices 00:04:29.084 ************************************ 00:04:29.084 00:04:29.084 real 0m24.221s 00:04:29.084 user 0m7.667s 00:04:29.084 sys 0m11.114s 00:04:29.084 04:53:43 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.084 04:53:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:29.084 ************************************ 00:04:29.084 END TEST setup.sh 00:04:29.084 ************************************ 00:04:29.084 04:53:43 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:30.020 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:30.020 Hugepages 00:04:30.020 node hugesize free / total 00:04:30.020 node0 1048576kB 0 / 0 00:04:30.020 node0 2048kB 2048 / 2048 00:04:30.020 00:04:30.020 Type BDF Vendor 
Device NUMA Driver Device Block devices 00:04:30.020 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:30.020 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:30.278 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:30.278 04:53:44 -- spdk/autotest.sh@130 -- # uname -s 00:04:30.278 04:53:44 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:30.278 04:53:44 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:30.278 04:53:44 -- common/autotest_common.sh@1529 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:30.846 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:31.105 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:31.105 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:31.105 04:53:45 -- common/autotest_common.sh@1530 -- # sleep 1 00:04:32.042 04:53:46 -- common/autotest_common.sh@1531 -- # bdfs=() 00:04:32.042 04:53:46 -- common/autotest_common.sh@1531 -- # local bdfs 00:04:32.042 04:53:46 -- common/autotest_common.sh@1532 -- # bdfs=($(get_nvme_bdfs)) 00:04:32.042 04:53:46 -- common/autotest_common.sh@1532 -- # get_nvme_bdfs 00:04:32.042 04:53:46 -- common/autotest_common.sh@1511 -- # bdfs=() 00:04:32.042 04:53:46 -- common/autotest_common.sh@1511 -- # local bdfs 00:04:32.042 04:53:46 -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:32.042 04:53:46 -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:04:32.042 04:53:46 -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:32.301 04:53:46 -- common/autotest_common.sh@1513 -- # (( 2 == 0 )) 00:04:32.301 04:53:46 -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:32.301 04:53:46 -- common/autotest_common.sh@1534 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:32.560 0000:00:03.0 (1af4 1001): 
Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:32.560 Waiting for block devices as requested 00:04:32.820 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:32.820 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:32.820 04:53:47 -- common/autotest_common.sh@1536 -- # for bdf in "${bdfs[@]}" 00:04:32.820 04:53:47 -- common/autotest_common.sh@1537 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:32.820 04:53:47 -- common/autotest_common.sh@1500 -- # grep 0000:00:10.0/nvme/nvme 00:04:32.820 04:53:47 -- common/autotest_common.sh@1500 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:32.820 04:53:47 -- common/autotest_common.sh@1500 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:32.820 04:53:47 -- common/autotest_common.sh@1501 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:32.820 04:53:47 -- common/autotest_common.sh@1505 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:32.820 04:53:47 -- common/autotest_common.sh@1505 -- # printf '%s\n' nvme1 00:04:32.820 04:53:47 -- common/autotest_common.sh@1537 -- # nvme_ctrlr=/dev/nvme1 00:04:32.820 04:53:47 -- common/autotest_common.sh@1538 -- # [[ -z /dev/nvme1 ]] 00:04:32.820 04:53:47 -- common/autotest_common.sh@1543 -- # nvme id-ctrl /dev/nvme1 00:04:32.820 04:53:47 -- common/autotest_common.sh@1543 -- # cut -d: -f2 00:04:32.820 04:53:47 -- common/autotest_common.sh@1543 -- # grep oacs 00:04:32.820 04:53:47 -- common/autotest_common.sh@1543 -- # oacs=' 0x12a' 00:04:32.820 04:53:47 -- common/autotest_common.sh@1544 -- # oacs_ns_manage=8 00:04:32.820 04:53:47 -- common/autotest_common.sh@1546 -- # [[ 8 -ne 0 ]] 00:04:32.820 04:53:47 -- common/autotest_common.sh@1552 -- # grep unvmcap 00:04:32.820 04:53:47 -- common/autotest_common.sh@1552 -- # nvme id-ctrl /dev/nvme1 00:04:32.820 04:53:47 -- common/autotest_common.sh@1552 -- # cut -d: -f2 00:04:32.820 04:53:47 -- 
common/autotest_common.sh@1552 -- # unvmcap=' 0' 00:04:32.820 04:53:47 -- common/autotest_common.sh@1553 -- # [[ 0 -eq 0 ]] 00:04:32.820 04:53:47 -- common/autotest_common.sh@1555 -- # continue 00:04:32.820 04:53:47 -- common/autotest_common.sh@1536 -- # for bdf in "${bdfs[@]}" 00:04:32.820 04:53:47 -- common/autotest_common.sh@1537 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:33.079 04:53:47 -- common/autotest_common.sh@1500 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:33.079 04:53:47 -- common/autotest_common.sh@1500 -- # grep 0000:00:11.0/nvme/nvme 00:04:33.079 04:53:47 -- common/autotest_common.sh@1500 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:33.079 04:53:47 -- common/autotest_common.sh@1501 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:33.079 04:53:47 -- common/autotest_common.sh@1505 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:33.079 04:53:47 -- common/autotest_common.sh@1505 -- # printf '%s\n' nvme0 00:04:33.079 04:53:47 -- common/autotest_common.sh@1537 -- # nvme_ctrlr=/dev/nvme0 00:04:33.079 04:53:47 -- common/autotest_common.sh@1538 -- # [[ -z /dev/nvme0 ]] 00:04:33.079 04:53:47 -- common/autotest_common.sh@1543 -- # nvme id-ctrl /dev/nvme0 00:04:33.079 04:53:47 -- common/autotest_common.sh@1543 -- # grep oacs 00:04:33.079 04:53:47 -- common/autotest_common.sh@1543 -- # cut -d: -f2 00:04:33.079 04:53:47 -- common/autotest_common.sh@1543 -- # oacs=' 0x12a' 00:04:33.079 04:53:47 -- common/autotest_common.sh@1544 -- # oacs_ns_manage=8 00:04:33.079 04:53:47 -- common/autotest_common.sh@1546 -- # [[ 8 -ne 0 ]] 00:04:33.079 04:53:47 -- common/autotest_common.sh@1552 -- # nvme id-ctrl /dev/nvme0 00:04:33.079 04:53:47 -- common/autotest_common.sh@1552 -- # grep unvmcap 00:04:33.079 04:53:47 -- common/autotest_common.sh@1552 -- # cut -d: -f2 00:04:33.079 04:53:47 -- common/autotest_common.sh@1552 -- # unvmcap=' 0' 00:04:33.079 04:53:47 -- 
common/autotest_common.sh@1553 -- # [[ 0 -eq 0 ]] 00:04:33.079 04:53:47 -- common/autotest_common.sh@1555 -- # continue 00:04:33.079 04:53:47 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:33.079 04:53:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:33.079 04:53:47 -- common/autotest_common.sh@10 -- # set +x 00:04:33.079 04:53:47 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:33.079 04:53:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:33.079 04:53:47 -- common/autotest_common.sh@10 -- # set +x 00:04:33.079 04:53:47 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:34.017 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:34.017 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:34.017 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:34.017 04:53:48 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:34.017 04:53:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:34.017 04:53:48 -- common/autotest_common.sh@10 -- # set +x 00:04:34.017 04:53:48 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:34.017 04:53:48 -- common/autotest_common.sh@1589 -- # mapfile -t bdfs 00:04:34.017 04:53:48 -- common/autotest_common.sh@1589 -- # get_nvme_bdfs_by_id 0x0a54 00:04:34.017 04:53:48 -- common/autotest_common.sh@1575 -- # bdfs=() 00:04:34.017 04:53:48 -- common/autotest_common.sh@1575 -- # local bdfs 00:04:34.017 04:53:48 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs 00:04:34.017 04:53:48 -- common/autotest_common.sh@1511 -- # bdfs=() 00:04:34.017 04:53:48 -- common/autotest_common.sh@1511 -- # local bdfs 00:04:34.017 04:53:48 -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:34.017 04:53:48 -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:34.017 04:53:48 -- 
common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:04:34.276 04:53:48 -- common/autotest_common.sh@1513 -- # (( 2 == 0 )) 00:04:34.276 04:53:48 -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:34.276 04:53:48 -- common/autotest_common.sh@1577 -- # for bdf in $(get_nvme_bdfs) 00:04:34.276 04:53:48 -- common/autotest_common.sh@1578 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:34.276 04:53:48 -- common/autotest_common.sh@1578 -- # device=0x0010 00:04:34.276 04:53:48 -- common/autotest_common.sh@1579 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:34.276 04:53:48 -- common/autotest_common.sh@1577 -- # for bdf in $(get_nvme_bdfs) 00:04:34.276 04:53:48 -- common/autotest_common.sh@1578 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:34.276 04:53:48 -- common/autotest_common.sh@1578 -- # device=0x0010 00:04:34.276 04:53:48 -- common/autotest_common.sh@1579 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:34.276 04:53:48 -- common/autotest_common.sh@1584 -- # printf '%s\n' 00:04:34.276 04:53:48 -- common/autotest_common.sh@1590 -- # [[ -z '' ]] 00:04:34.276 04:53:48 -- common/autotest_common.sh@1591 -- # return 0 00:04:34.276 04:53:48 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:34.276 04:53:48 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:34.276 04:53:48 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:34.276 04:53:48 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:34.276 04:53:48 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:34.276 04:53:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:34.276 04:53:48 -- common/autotest_common.sh@10 -- # set +x 00:04:34.276 04:53:48 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:04:34.276 04:53:48 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:34.276 04:53:48 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:34.276 04:53:48 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 
00:04:34.276 04:53:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.276 04:53:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.276 04:53:48 -- common/autotest_common.sh@10 -- # set +x 00:04:34.276 ************************************ 00:04:34.276 START TEST env 00:04:34.276 ************************************ 00:04:34.276 04:53:48 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:34.276 * Looking for test storage... 00:04:34.276 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:34.276 04:53:48 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:34.276 04:53:48 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.276 04:53:48 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.276 04:53:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.276 ************************************ 00:04:34.276 START TEST env_memory 00:04:34.276 ************************************ 00:04:34.276 04:53:48 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:34.276 00:04:34.276 00:04:34.276 CUnit - A unit testing framework for C - Version 2.1-3 00:04:34.276 http://cunit.sourceforge.net/ 00:04:34.276 00:04:34.276 00:04:34.276 Suite: memory 00:04:34.535 Test: alloc and free memory map ...[2024-07-24 04:53:48.915663] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:34.535 passed 00:04:34.535 Test: mem map translation ...[2024-07-24 04:53:48.983451] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:34.535 [2024-07-24 04:53:48.983526] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid 
spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:34.535 [2024-07-24 04:53:48.983640] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:34.535 [2024-07-24 04:53:48.983679] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:34.535 passed 00:04:34.535 Test: mem map registration ...[2024-07-24 04:53:49.090185] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:34.535 [2024-07-24 04:53:49.090253] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:34.535 passed 00:04:34.795 Test: mem map adjacent registrations ...passed 00:04:34.795 00:04:34.795 Run Summary: Type Total Ran Passed Failed Inactive 00:04:34.795 suites 1 1 n/a 0 0 00:04:34.795 tests 4 4 4 0 0 00:04:34.795 asserts 152 152 152 0 n/a 00:04:34.795 00:04:34.795 Elapsed time = 0.376 seconds 00:04:34.795 00:04:34.795 real 0m0.425s 00:04:34.795 user 0m0.380s 00:04:34.795 sys 0m0.039s 00:04:34.795 04:53:49 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.795 04:53:49 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:34.795 ************************************ 00:04:34.795 END TEST env_memory 00:04:34.795 ************************************ 00:04:34.795 04:53:49 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:34.795 04:53:49 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.795 04:53:49 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.795 04:53:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.795 ************************************ 00:04:34.795 START TEST env_vtophys 
00:04:34.795 ************************************ 00:04:34.795 04:53:49 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:34.795 EAL: lib.eal log level changed from notice to debug 00:04:34.795 EAL: Detected lcore 0 as core 0 on socket 0 00:04:34.795 EAL: Detected lcore 1 as core 0 on socket 0 00:04:34.795 EAL: Detected lcore 2 as core 0 on socket 0 00:04:34.795 EAL: Detected lcore 3 as core 0 on socket 0 00:04:34.795 EAL: Detected lcore 4 as core 0 on socket 0 00:04:34.795 EAL: Detected lcore 5 as core 0 on socket 0 00:04:34.795 EAL: Detected lcore 6 as core 0 on socket 0 00:04:34.795 EAL: Detected lcore 7 as core 0 on socket 0 00:04:34.795 EAL: Detected lcore 8 as core 0 on socket 0 00:04:34.795 EAL: Detected lcore 9 as core 0 on socket 0 00:04:34.795 EAL: Maximum logical cores by configuration: 128 00:04:34.795 EAL: Detected CPU lcores: 10 00:04:34.795 EAL: Detected NUMA nodes: 1 00:04:34.795 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:34.795 EAL: Detected shared linkage of DPDK 00:04:35.057 EAL: No shared files mode enabled, IPC will be disabled 00:04:35.057 EAL: Selected IOVA mode 'PA' 00:04:35.057 EAL: Probing VFIO support... 00:04:35.057 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:35.058 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:35.058 EAL: Ask a virtual area of 0x2e000 bytes 00:04:35.058 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:35.058 EAL: Setting up physically contiguous memory... 
00:04:35.058 EAL: Setting maximum number of open files to 524288 00:04:35.058 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:35.058 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:35.058 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.058 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:35.058 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.058 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.058 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:35.058 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:35.058 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.058 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:35.058 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.058 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.058 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:35.058 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:35.058 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.058 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:35.058 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.058 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.058 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:35.058 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:35.058 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.058 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:35.058 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.058 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.058 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:35.058 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:35.058 EAL: Hugepages will be freed exactly as allocated. 
00:04:35.058 EAL: No shared files mode enabled, IPC is disabled 00:04:35.058 EAL: No shared files mode enabled, IPC is disabled 00:04:35.058 EAL: TSC frequency is ~2100000 KHz 00:04:35.058 EAL: Main lcore 0 is ready (tid=7fd9b26f3a40;cpuset=[0]) 00:04:35.058 EAL: Trying to obtain current memory policy. 00:04:35.058 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.058 EAL: Restoring previous memory policy: 0 00:04:35.058 EAL: request: mp_malloc_sync 00:04:35.058 EAL: No shared files mode enabled, IPC is disabled 00:04:35.058 EAL: Heap on socket 0 was expanded by 2MB 00:04:35.058 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:35.058 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:35.058 EAL: Mem event callback 'spdk:(nil)' registered 00:04:35.058 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:35.058 00:04:35.058 00:04:35.058 CUnit - A unit testing framework for C - Version 2.1-3 00:04:35.058 http://cunit.sourceforge.net/ 00:04:35.058 00:04:35.058 00:04:35.058 Suite: components_suite 00:04:35.625 Test: vtophys_malloc_test ...passed 00:04:35.625 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:35.625 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.625 EAL: Restoring previous memory policy: 4 00:04:35.625 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.625 EAL: request: mp_malloc_sync 00:04:35.625 EAL: No shared files mode enabled, IPC is disabled 00:04:35.625 EAL: Heap on socket 0 was expanded by 4MB 00:04:35.625 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.625 EAL: request: mp_malloc_sync 00:04:35.625 EAL: No shared files mode enabled, IPC is disabled 00:04:35.625 EAL: Heap on socket 0 was shrunk by 4MB 00:04:35.625 EAL: Trying to obtain current memory policy. 
00:04:35.625 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.625 EAL: Restoring previous memory policy: 4 00:04:35.625 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.625 EAL: request: mp_malloc_sync 00:04:35.625 EAL: No shared files mode enabled, IPC is disabled 00:04:35.625 EAL: Heap on socket 0 was expanded by 6MB 00:04:35.625 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.625 EAL: request: mp_malloc_sync 00:04:35.625 EAL: No shared files mode enabled, IPC is disabled 00:04:35.625 EAL: Heap on socket 0 was shrunk by 6MB 00:04:35.625 EAL: Trying to obtain current memory policy. 00:04:35.625 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.625 EAL: Restoring previous memory policy: 4 00:04:35.625 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.625 EAL: request: mp_malloc_sync 00:04:35.625 EAL: No shared files mode enabled, IPC is disabled 00:04:35.625 EAL: Heap on socket 0 was expanded by 10MB 00:04:35.625 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.625 EAL: request: mp_malloc_sync 00:04:35.625 EAL: No shared files mode enabled, IPC is disabled 00:04:35.625 EAL: Heap on socket 0 was shrunk by 10MB 00:04:35.625 EAL: Trying to obtain current memory policy. 00:04:35.625 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.625 EAL: Restoring previous memory policy: 4 00:04:35.625 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.625 EAL: request: mp_malloc_sync 00:04:35.625 EAL: No shared files mode enabled, IPC is disabled 00:04:35.625 EAL: Heap on socket 0 was expanded by 18MB 00:04:35.625 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.625 EAL: request: mp_malloc_sync 00:04:35.625 EAL: No shared files mode enabled, IPC is disabled 00:04:35.625 EAL: Heap on socket 0 was shrunk by 18MB 00:04:35.625 EAL: Trying to obtain current memory policy. 
00:04:35.625 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.625 EAL: Restoring previous memory policy: 4 00:04:35.625 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.625 EAL: request: mp_malloc_sync 00:04:35.625 EAL: No shared files mode enabled, IPC is disabled 00:04:35.625 EAL: Heap on socket 0 was expanded by 34MB 00:04:35.625 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.884 EAL: request: mp_malloc_sync 00:04:35.884 EAL: No shared files mode enabled, IPC is disabled 00:04:35.884 EAL: Heap on socket 0 was shrunk by 34MB 00:04:35.884 EAL: Trying to obtain current memory policy. 00:04:35.884 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.884 EAL: Restoring previous memory policy: 4 00:04:35.884 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.884 EAL: request: mp_malloc_sync 00:04:35.884 EAL: No shared files mode enabled, IPC is disabled 00:04:35.884 EAL: Heap on socket 0 was expanded by 66MB 00:04:35.884 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.884 EAL: request: mp_malloc_sync 00:04:35.884 EAL: No shared files mode enabled, IPC is disabled 00:04:35.884 EAL: Heap on socket 0 was shrunk by 66MB 00:04:36.143 EAL: Trying to obtain current memory policy. 00:04:36.143 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.143 EAL: Restoring previous memory policy: 4 00:04:36.143 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.143 EAL: request: mp_malloc_sync 00:04:36.143 EAL: No shared files mode enabled, IPC is disabled 00:04:36.143 EAL: Heap on socket 0 was expanded by 130MB 00:04:36.401 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.401 EAL: request: mp_malloc_sync 00:04:36.401 EAL: No shared files mode enabled, IPC is disabled 00:04:36.401 EAL: Heap on socket 0 was shrunk by 130MB 00:04:36.659 EAL: Trying to obtain current memory policy. 
00:04:36.659 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.659 EAL: Restoring previous memory policy: 4 00:04:36.659 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.659 EAL: request: mp_malloc_sync 00:04:36.659 EAL: No shared files mode enabled, IPC is disabled 00:04:36.659 EAL: Heap on socket 0 was expanded by 258MB 00:04:37.225 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.225 EAL: request: mp_malloc_sync 00:04:37.225 EAL: No shared files mode enabled, IPC is disabled 00:04:37.225 EAL: Heap on socket 0 was shrunk by 258MB 00:04:37.484 EAL: Trying to obtain current memory policy. 00:04:37.484 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.742 EAL: Restoring previous memory policy: 4 00:04:37.742 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.742 EAL: request: mp_malloc_sync 00:04:37.742 EAL: No shared files mode enabled, IPC is disabled 00:04:37.742 EAL: Heap on socket 0 was expanded by 514MB 00:04:38.676 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.933 EAL: request: mp_malloc_sync 00:04:38.933 EAL: No shared files mode enabled, IPC is disabled 00:04:38.933 EAL: Heap on socket 0 was shrunk by 514MB 00:04:39.500 EAL: Trying to obtain current memory policy. 
00:04:39.500 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.759 EAL: Restoring previous memory policy: 4 00:04:39.759 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.759 EAL: request: mp_malloc_sync 00:04:39.759 EAL: No shared files mode enabled, IPC is disabled 00:04:39.759 EAL: Heap on socket 0 was expanded by 1026MB 00:04:42.291 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.291 EAL: request: mp_malloc_sync 00:04:42.291 EAL: No shared files mode enabled, IPC is disabled 00:04:42.291 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:43.668 passed 00:04:43.668 00:04:43.668 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.668 suites 1 1 n/a 0 0 00:04:43.668 tests 2 2 2 0 0 00:04:43.668 asserts 5439 5439 5439 0 n/a 00:04:43.668 00:04:43.668 Elapsed time = 8.510 seconds 00:04:43.668 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.668 EAL: request: mp_malloc_sync 00:04:43.668 EAL: No shared files mode enabled, IPC is disabled 00:04:43.668 EAL: Heap on socket 0 was shrunk by 2MB 00:04:43.668 EAL: No shared files mode enabled, IPC is disabled 00:04:43.668 EAL: No shared files mode enabled, IPC is disabled 00:04:43.668 EAL: No shared files mode enabled, IPC is disabled 00:04:43.668 00:04:43.668 real 0m8.855s 00:04:43.668 user 0m7.791s 00:04:43.668 sys 0m0.910s 00:04:43.668 04:53:58 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.668 04:53:58 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:43.668 ************************************ 00:04:43.668 END TEST env_vtophys 00:04:43.668 ************************************ 00:04:43.668 04:53:58 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:43.668 04:53:58 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.668 04:53:58 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.668 04:53:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:43.668 
************************************ 00:04:43.668 START TEST env_pci 00:04:43.668 ************************************ 00:04:43.668 04:53:58 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:43.668 00:04:43.668 00:04:43.668 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.668 http://cunit.sourceforge.net/ 00:04:43.668 00:04:43.668 00:04:43.668 Suite: pci 00:04:43.668 Test: pci_hook ...[2024-07-24 04:53:58.286759] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58829 has claimed it 00:04:43.927 passed 00:04:43.927 00:04:43.927 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.928 suites 1 1 n/a 0 0 00:04:43.928 tests 1 1 1 0 0 00:04:43.928 asserts 25 25 25 0 n/a 00:04:43.928 00:04:43.928 Elapsed time = 0.009 seconds 00:04:43.928 EAL: Cannot find device (10000:00:01.0) 00:04:43.928 EAL: Failed to attach device on primary process 00:04:43.928 ************************************ 00:04:43.928 END TEST env_pci 00:04:43.928 ************************************ 00:04:43.928 00:04:43.928 real 0m0.100s 00:04:43.928 user 0m0.045s 00:04:43.928 sys 0m0.053s 00:04:43.928 04:53:58 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.928 04:53:58 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:43.928 04:53:58 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:43.928 04:53:58 env -- env/env.sh@15 -- # uname 00:04:43.928 04:53:58 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:43.928 04:53:58 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:43.928 04:53:58 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:43.928 04:53:58 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:43.928 04:53:58 env 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.928 04:53:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:43.928 ************************************ 00:04:43.928 START TEST env_dpdk_post_init 00:04:43.928 ************************************ 00:04:43.928 04:53:58 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:43.928 EAL: Detected CPU lcores: 10 00:04:43.928 EAL: Detected NUMA nodes: 1 00:04:43.928 EAL: Detected shared linkage of DPDK 00:04:43.928 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:43.928 EAL: Selected IOVA mode 'PA' 00:04:44.187 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:44.187 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:44.187 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:44.187 Starting DPDK initialization... 00:04:44.187 Starting SPDK post initialization... 00:04:44.187 SPDK NVMe probe 00:04:44.187 Attaching to 0000:00:10.0 00:04:44.187 Attaching to 0000:00:11.0 00:04:44.187 Attached to 0000:00:10.0 00:04:44.187 Attached to 0000:00:11.0 00:04:44.187 Cleaning up... 
00:04:44.187 00:04:44.187 real 0m0.309s 00:04:44.187 user 0m0.099s 00:04:44.187 sys 0m0.109s 00:04:44.187 04:53:58 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.187 04:53:58 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:44.187 ************************************ 00:04:44.187 END TEST env_dpdk_post_init 00:04:44.187 ************************************ 00:04:44.187 04:53:58 env -- env/env.sh@26 -- # uname 00:04:44.187 04:53:58 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:44.187 04:53:58 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:44.187 04:53:58 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.187 04:53:58 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.187 04:53:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.187 ************************************ 00:04:44.187 START TEST env_mem_callbacks 00:04:44.187 ************************************ 00:04:44.187 04:53:58 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:44.446 EAL: Detected CPU lcores: 10 00:04:44.446 EAL: Detected NUMA nodes: 1 00:04:44.446 EAL: Detected shared linkage of DPDK 00:04:44.446 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:44.446 EAL: Selected IOVA mode 'PA' 00:04:44.446 00:04:44.446 00:04:44.446 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.446 http://cunit.sourceforge.net/ 00:04:44.446 00:04:44.446 00:04:44.446 Suite: memory 00:04:44.446 Test: test ... 
00:04:44.446 register 0x200000200000 2097152 00:04:44.446 malloc 3145728 00:04:44.446 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:44.446 register 0x200000400000 4194304 00:04:44.446 buf 0x2000004fffc0 len 3145728 PASSED 00:04:44.446 malloc 64 00:04:44.446 buf 0x2000004ffec0 len 64 PASSED 00:04:44.446 malloc 4194304 00:04:44.446 register 0x200000800000 6291456 00:04:44.446 buf 0x2000009fffc0 len 4194304 PASSED 00:04:44.446 free 0x2000004fffc0 3145728 00:04:44.446 free 0x2000004ffec0 64 00:04:44.446 unregister 0x200000400000 4194304 PASSED 00:04:44.446 free 0x2000009fffc0 4194304 00:04:44.446 unregister 0x200000800000 6291456 PASSED 00:04:44.446 malloc 8388608 00:04:44.446 register 0x200000400000 10485760 00:04:44.446 buf 0x2000005fffc0 len 8388608 PASSED 00:04:44.446 free 0x2000005fffc0 8388608 00:04:44.446 unregister 0x200000400000 10485760 PASSED 00:04:44.446 passed 00:04:44.446 00:04:44.446 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.446 suites 1 1 n/a 0 0 00:04:44.446 tests 1 1 1 0 0 00:04:44.446 asserts 15 15 15 0 n/a 00:04:44.446 00:04:44.446 Elapsed time = 0.074 seconds 00:04:44.706 00:04:44.706 real 0m0.294s 00:04:44.706 user 0m0.119s 00:04:44.706 sys 0m0.073s 00:04:44.706 04:53:59 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.706 04:53:59 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:44.706 ************************************ 00:04:44.706 END TEST env_mem_callbacks 00:04:44.706 ************************************ 00:04:44.706 00:04:44.706 real 0m10.409s 00:04:44.706 user 0m8.557s 00:04:44.706 sys 0m1.474s 00:04:44.706 04:53:59 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.706 ************************************ 00:04:44.706 END TEST env 00:04:44.706 ************************************ 00:04:44.706 04:53:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.706 04:53:59 -- spdk/autotest.sh@169 -- # run_test rpc 
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:44.706 04:53:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.706 04:53:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.706 04:53:59 -- common/autotest_common.sh@10 -- # set +x 00:04:44.706 ************************************ 00:04:44.706 START TEST rpc 00:04:44.706 ************************************ 00:04:44.706 04:53:59 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:44.706 * Looking for test storage... 00:04:44.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:44.706 04:53:59 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58948 00:04:44.706 04:53:59 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.706 04:53:59 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58948 00:04:44.706 04:53:59 rpc -- common/autotest_common.sh@829 -- # '[' -z 58948 ']' 00:04:44.706 04:53:59 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.706 04:53:59 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:44.706 04:53:59 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.706 04:53:59 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.706 04:53:59 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.706 04:53:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.978 [2024-07-24 04:53:59.436868] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:04:44.978 [2024-07-24 04:53:59.437056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58948 ] 00:04:45.290 [2024-07-24 04:53:59.624582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.290 [2024-07-24 04:53:59.839169] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:45.290 [2024-07-24 04:53:59.839228] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58948' to capture a snapshot of events at runtime. 00:04:45.290 [2024-07-24 04:53:59.839252] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:45.290 [2024-07-24 04:53:59.839263] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:45.290 [2024-07-24 04:53:59.839279] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58948 for offline analysis/debug. 
00:04:45.290 [2024-07-24 04:53:59.839314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.549 [2024-07-24 04:54:00.070402] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:46.116 04:54:00 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.116 04:54:00 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:46.116 04:54:00 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:46.116 04:54:00 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:46.116 04:54:00 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:46.116 04:54:00 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:46.116 04:54:00 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.116 04:54:00 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.116 04:54:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.116 ************************************ 00:04:46.116 START TEST rpc_integrity 00:04:46.116 ************************************ 00:04:46.116 04:54:00 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:46.116 04:54:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:46.116 04:54:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.116 04:54:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.375 04:54:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.375 04:54:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:46.375 04:54:00 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # jq length 00:04:46.375 04:54:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:46.375 04:54:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:46.375 04:54:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.375 04:54:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.375 04:54:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.375 04:54:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:46.375 04:54:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:46.375 04:54:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.375 04:54:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.375 04:54:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.375 04:54:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:46.375 { 00:04:46.375 "name": "Malloc0", 00:04:46.375 "aliases": [ 00:04:46.375 "969f1fb8-367b-49de-aab8-b17f7843527c" 00:04:46.375 ], 00:04:46.375 "product_name": "Malloc disk", 00:04:46.375 "block_size": 512, 00:04:46.375 "num_blocks": 16384, 00:04:46.375 "uuid": "969f1fb8-367b-49de-aab8-b17f7843527c", 00:04:46.375 "assigned_rate_limits": { 00:04:46.375 "rw_ios_per_sec": 0, 00:04:46.375 "rw_mbytes_per_sec": 0, 00:04:46.375 "r_mbytes_per_sec": 0, 00:04:46.375 "w_mbytes_per_sec": 0 00:04:46.375 }, 00:04:46.375 "claimed": false, 00:04:46.375 "zoned": false, 00:04:46.375 "supported_io_types": { 00:04:46.375 "read": true, 00:04:46.375 "write": true, 00:04:46.375 "unmap": true, 00:04:46.375 "flush": true, 00:04:46.375 "reset": true, 00:04:46.375 "nvme_admin": false, 00:04:46.375 "nvme_io": false, 00:04:46.375 "nvme_io_md": false, 00:04:46.375 "write_zeroes": true, 00:04:46.375 "zcopy": true, 00:04:46.375 "get_zone_info": false, 00:04:46.375 "zone_management": false, 00:04:46.375 "zone_append": false, 
00:04:46.375 "compare": false, 00:04:46.375 "compare_and_write": false, 00:04:46.375 "abort": true, 00:04:46.375 "seek_hole": false, 00:04:46.375 "seek_data": false, 00:04:46.375 "copy": true, 00:04:46.375 "nvme_iov_md": false 00:04:46.375 }, 00:04:46.375 "memory_domains": [ 00:04:46.375 { 00:04:46.375 "dma_device_id": "system", 00:04:46.375 "dma_device_type": 1 00:04:46.375 }, 00:04:46.375 { 00:04:46.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.375 "dma_device_type": 2 00:04:46.375 } 00:04:46.375 ], 00:04:46.375 "driver_specific": {} 00:04:46.375 } 00:04:46.375 ]' 00:04:46.375 04:54:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:46.375 04:54:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:46.375 04:54:00 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:46.375 04:54:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.375 04:54:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.375 [2024-07-24 04:54:00.889363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:46.375 [2024-07-24 04:54:00.889428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:46.375 [2024-07-24 04:54:00.889458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:46.375 [2024-07-24 04:54:00.889472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:46.375 [2024-07-24 04:54:00.891859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:46.375 [2024-07-24 04:54:00.891903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:46.375 Passthru0 00:04:46.375 04:54:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.375 04:54:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:46.375 04:54:00 rpc.rpc_integrity -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.375 04:54:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.375 04:54:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.375 04:54:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:46.375 { 00:04:46.375 "name": "Malloc0", 00:04:46.375 "aliases": [ 00:04:46.375 "969f1fb8-367b-49de-aab8-b17f7843527c" 00:04:46.375 ], 00:04:46.375 "product_name": "Malloc disk", 00:04:46.375 "block_size": 512, 00:04:46.375 "num_blocks": 16384, 00:04:46.375 "uuid": "969f1fb8-367b-49de-aab8-b17f7843527c", 00:04:46.375 "assigned_rate_limits": { 00:04:46.375 "rw_ios_per_sec": 0, 00:04:46.375 "rw_mbytes_per_sec": 0, 00:04:46.375 "r_mbytes_per_sec": 0, 00:04:46.375 "w_mbytes_per_sec": 0 00:04:46.375 }, 00:04:46.375 "claimed": true, 00:04:46.375 "claim_type": "exclusive_write", 00:04:46.375 "zoned": false, 00:04:46.375 "supported_io_types": { 00:04:46.375 "read": true, 00:04:46.375 "write": true, 00:04:46.375 "unmap": true, 00:04:46.375 "flush": true, 00:04:46.375 "reset": true, 00:04:46.375 "nvme_admin": false, 00:04:46.375 "nvme_io": false, 00:04:46.375 "nvme_io_md": false, 00:04:46.375 "write_zeroes": true, 00:04:46.375 "zcopy": true, 00:04:46.375 "get_zone_info": false, 00:04:46.375 "zone_management": false, 00:04:46.375 "zone_append": false, 00:04:46.375 "compare": false, 00:04:46.375 "compare_and_write": false, 00:04:46.375 "abort": true, 00:04:46.375 "seek_hole": false, 00:04:46.375 "seek_data": false, 00:04:46.375 "copy": true, 00:04:46.375 "nvme_iov_md": false 00:04:46.375 }, 00:04:46.375 "memory_domains": [ 00:04:46.375 { 00:04:46.375 "dma_device_id": "system", 00:04:46.375 "dma_device_type": 1 00:04:46.375 }, 00:04:46.375 { 00:04:46.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.375 "dma_device_type": 2 00:04:46.375 } 00:04:46.375 ], 00:04:46.375 "driver_specific": {} 00:04:46.375 }, 00:04:46.375 { 00:04:46.375 "name": "Passthru0", 00:04:46.375 "aliases": 
[ 00:04:46.375 "52478f87-5cc5-5942-8295-6b33239a28b0" 00:04:46.375 ], 00:04:46.375 "product_name": "passthru", 00:04:46.375 "block_size": 512, 00:04:46.375 "num_blocks": 16384, 00:04:46.375 "uuid": "52478f87-5cc5-5942-8295-6b33239a28b0", 00:04:46.375 "assigned_rate_limits": { 00:04:46.375 "rw_ios_per_sec": 0, 00:04:46.375 "rw_mbytes_per_sec": 0, 00:04:46.375 "r_mbytes_per_sec": 0, 00:04:46.375 "w_mbytes_per_sec": 0 00:04:46.375 }, 00:04:46.375 "claimed": false, 00:04:46.375 "zoned": false, 00:04:46.375 "supported_io_types": { 00:04:46.375 "read": true, 00:04:46.375 "write": true, 00:04:46.375 "unmap": true, 00:04:46.375 "flush": true, 00:04:46.375 "reset": true, 00:04:46.375 "nvme_admin": false, 00:04:46.375 "nvme_io": false, 00:04:46.375 "nvme_io_md": false, 00:04:46.375 "write_zeroes": true, 00:04:46.375 "zcopy": true, 00:04:46.375 "get_zone_info": false, 00:04:46.375 "zone_management": false, 00:04:46.375 "zone_append": false, 00:04:46.375 "compare": false, 00:04:46.375 "compare_and_write": false, 00:04:46.375 "abort": true, 00:04:46.375 "seek_hole": false, 00:04:46.375 "seek_data": false, 00:04:46.375 "copy": true, 00:04:46.375 "nvme_iov_md": false 00:04:46.375 }, 00:04:46.375 "memory_domains": [ 00:04:46.375 { 00:04:46.375 "dma_device_id": "system", 00:04:46.375 "dma_device_type": 1 00:04:46.375 }, 00:04:46.375 { 00:04:46.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.375 "dma_device_type": 2 00:04:46.375 } 00:04:46.375 ], 00:04:46.375 "driver_specific": { 00:04:46.375 "passthru": { 00:04:46.375 "name": "Passthru0", 00:04:46.375 "base_bdev_name": "Malloc0" 00:04:46.375 } 00:04:46.375 } 00:04:46.375 } 00:04:46.375 ]' 00:04:46.375 04:54:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:46.375 04:54:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:46.375 04:54:00 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:46.375 04:54:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 
00:04:46.375 04:54:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.375 04:54:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.375 04:54:00 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:46.375 04:54:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.375 04:54:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.375 04:54:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.375 04:54:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:46.375 04:54:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.375 04:54:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.634 04:54:01 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.634 04:54:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:46.634 04:54:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:46.634 ************************************ 00:04:46.634 END TEST rpc_integrity 00:04:46.634 ************************************ 00:04:46.634 04:54:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:46.634 00:04:46.634 real 0m0.306s 00:04:46.634 user 0m0.154s 00:04:46.634 sys 0m0.053s 00:04:46.634 04:54:01 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.634 04:54:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.634 04:54:01 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:46.634 04:54:01 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.634 04:54:01 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.634 04:54:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.634 ************************************ 00:04:46.634 START TEST rpc_plugins 00:04:46.634 ************************************ 00:04:46.634 04:54:01 
rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:46.634 04:54:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:46.634 04:54:01 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.634 04:54:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.634 04:54:01 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.634 04:54:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:46.634 04:54:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:46.634 04:54:01 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.634 04:54:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.634 04:54:01 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.634 04:54:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:46.634 { 00:04:46.634 "name": "Malloc1", 00:04:46.634 "aliases": [ 00:04:46.634 "f214870f-e47f-447b-a9d4-bf26db3baadb" 00:04:46.634 ], 00:04:46.634 "product_name": "Malloc disk", 00:04:46.634 "block_size": 4096, 00:04:46.634 "num_blocks": 256, 00:04:46.634 "uuid": "f214870f-e47f-447b-a9d4-bf26db3baadb", 00:04:46.634 "assigned_rate_limits": { 00:04:46.634 "rw_ios_per_sec": 0, 00:04:46.634 "rw_mbytes_per_sec": 0, 00:04:46.634 "r_mbytes_per_sec": 0, 00:04:46.634 "w_mbytes_per_sec": 0 00:04:46.634 }, 00:04:46.634 "claimed": false, 00:04:46.634 "zoned": false, 00:04:46.634 "supported_io_types": { 00:04:46.634 "read": true, 00:04:46.634 "write": true, 00:04:46.634 "unmap": true, 00:04:46.634 "flush": true, 00:04:46.634 "reset": true, 00:04:46.634 "nvme_admin": false, 00:04:46.634 "nvme_io": false, 00:04:46.634 "nvme_io_md": false, 00:04:46.634 "write_zeroes": true, 00:04:46.634 "zcopy": true, 00:04:46.634 "get_zone_info": false, 00:04:46.634 "zone_management": false, 00:04:46.634 "zone_append": false, 00:04:46.634 "compare": false, 00:04:46.634 
"compare_and_write": false, 00:04:46.634 "abort": true, 00:04:46.634 "seek_hole": false, 00:04:46.634 "seek_data": false, 00:04:46.634 "copy": true, 00:04:46.634 "nvme_iov_md": false 00:04:46.634 }, 00:04:46.634 "memory_domains": [ 00:04:46.634 { 00:04:46.634 "dma_device_id": "system", 00:04:46.634 "dma_device_type": 1 00:04:46.634 }, 00:04:46.634 { 00:04:46.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.634 "dma_device_type": 2 00:04:46.634 } 00:04:46.634 ], 00:04:46.634 "driver_specific": {} 00:04:46.634 } 00:04:46.634 ]' 00:04:46.634 04:54:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:46.634 04:54:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:46.634 04:54:01 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:46.634 04:54:01 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.634 04:54:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.634 04:54:01 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.634 04:54:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:46.634 04:54:01 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.634 04:54:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.634 04:54:01 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.634 04:54:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:46.634 04:54:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:46.634 ************************************ 00:04:46.634 END TEST rpc_plugins 00:04:46.634 ************************************ 00:04:46.634 04:54:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:46.634 00:04:46.634 real 0m0.141s 00:04:46.634 user 0m0.076s 00:04:46.634 sys 0m0.028s 00:04:46.634 04:54:01 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.634 04:54:01 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:04:46.893 04:54:01 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:46.893 04:54:01 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.893 04:54:01 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.893 04:54:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.893 ************************************ 00:04:46.893 START TEST rpc_trace_cmd_test 00:04:46.893 ************************************ 00:04:46.893 04:54:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:46.893 04:54:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:46.893 04:54:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:46.893 04:54:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.893 04:54:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:46.893 04:54:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.893 04:54:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:46.893 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58948", 00:04:46.893 "tpoint_group_mask": "0x8", 00:04:46.893 "iscsi_conn": { 00:04:46.893 "mask": "0x2", 00:04:46.893 "tpoint_mask": "0x0" 00:04:46.893 }, 00:04:46.893 "scsi": { 00:04:46.893 "mask": "0x4", 00:04:46.893 "tpoint_mask": "0x0" 00:04:46.893 }, 00:04:46.893 "bdev": { 00:04:46.893 "mask": "0x8", 00:04:46.893 "tpoint_mask": "0xffffffffffffffff" 00:04:46.893 }, 00:04:46.893 "nvmf_rdma": { 00:04:46.893 "mask": "0x10", 00:04:46.893 "tpoint_mask": "0x0" 00:04:46.893 }, 00:04:46.893 "nvmf_tcp": { 00:04:46.893 "mask": "0x20", 00:04:46.893 "tpoint_mask": "0x0" 00:04:46.893 }, 00:04:46.893 "ftl": { 00:04:46.893 "mask": "0x40", 00:04:46.893 "tpoint_mask": "0x0" 00:04:46.893 }, 00:04:46.893 "blobfs": { 00:04:46.893 "mask": "0x80", 00:04:46.893 "tpoint_mask": "0x0" 00:04:46.893 }, 
00:04:46.893 "dsa": { 00:04:46.893 "mask": "0x200", 00:04:46.893 "tpoint_mask": "0x0" 00:04:46.893 }, 00:04:46.893 "thread": { 00:04:46.893 "mask": "0x400", 00:04:46.893 "tpoint_mask": "0x0" 00:04:46.893 }, 00:04:46.893 "nvme_pcie": { 00:04:46.893 "mask": "0x800", 00:04:46.893 "tpoint_mask": "0x0" 00:04:46.893 }, 00:04:46.893 "iaa": { 00:04:46.893 "mask": "0x1000", 00:04:46.893 "tpoint_mask": "0x0" 00:04:46.893 }, 00:04:46.893 "nvme_tcp": { 00:04:46.893 "mask": "0x2000", 00:04:46.893 "tpoint_mask": "0x0" 00:04:46.893 }, 00:04:46.893 "bdev_nvme": { 00:04:46.893 "mask": "0x4000", 00:04:46.893 "tpoint_mask": "0x0" 00:04:46.893 }, 00:04:46.893 "sock": { 00:04:46.893 "mask": "0x8000", 00:04:46.893 "tpoint_mask": "0x0" 00:04:46.893 } 00:04:46.893 }' 00:04:46.893 04:54:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:46.893 04:54:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:46.893 04:54:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:46.893 04:54:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:46.893 04:54:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:46.893 04:54:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:46.893 04:54:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:46.893 04:54:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:46.893 04:54:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:47.152 ************************************ 00:04:47.152 END TEST rpc_trace_cmd_test 00:04:47.152 ************************************ 00:04:47.152 04:54:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:47.152 00:04:47.152 real 0m0.244s 00:04:47.152 user 0m0.194s 00:04:47.152 sys 0m0.040s 00:04:47.152 04:54:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.152 
04:54:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:47.152 04:54:01 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:47.152 04:54:01 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:47.152 04:54:01 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:47.153 04:54:01 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.153 04:54:01 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.153 04:54:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.153 ************************************ 00:04:47.153 START TEST rpc_daemon_integrity 00:04:47.153 ************************************ 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:47.153 04:54:01 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:47.153 { 00:04:47.153 "name": "Malloc2", 00:04:47.153 "aliases": [ 00:04:47.153 "28fae224-ad52-4f7a-b07b-9d23d99e7354" 00:04:47.153 ], 00:04:47.153 "product_name": "Malloc disk", 00:04:47.153 "block_size": 512, 00:04:47.153 "num_blocks": 16384, 00:04:47.153 "uuid": "28fae224-ad52-4f7a-b07b-9d23d99e7354", 00:04:47.153 "assigned_rate_limits": { 00:04:47.153 "rw_ios_per_sec": 0, 00:04:47.153 "rw_mbytes_per_sec": 0, 00:04:47.153 "r_mbytes_per_sec": 0, 00:04:47.153 "w_mbytes_per_sec": 0 00:04:47.153 }, 00:04:47.153 "claimed": false, 00:04:47.153 "zoned": false, 00:04:47.153 "supported_io_types": { 00:04:47.153 "read": true, 00:04:47.153 "write": true, 00:04:47.153 "unmap": true, 00:04:47.153 "flush": true, 00:04:47.153 "reset": true, 00:04:47.153 "nvme_admin": false, 00:04:47.153 "nvme_io": false, 00:04:47.153 "nvme_io_md": false, 00:04:47.153 "write_zeroes": true, 00:04:47.153 "zcopy": true, 00:04:47.153 "get_zone_info": false, 00:04:47.153 "zone_management": false, 00:04:47.153 "zone_append": false, 00:04:47.153 "compare": false, 00:04:47.153 "compare_and_write": false, 00:04:47.153 "abort": true, 00:04:47.153 "seek_hole": false, 00:04:47.153 "seek_data": false, 00:04:47.153 "copy": true, 00:04:47.153 "nvme_iov_md": false 00:04:47.153 }, 00:04:47.153 "memory_domains": [ 00:04:47.153 { 00:04:47.153 "dma_device_id": "system", 00:04:47.153 "dma_device_type": 1 00:04:47.153 }, 00:04:47.153 { 00:04:47.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.153 "dma_device_type": 2 00:04:47.153 } 00:04:47.153 ], 00:04:47.153 "driver_specific": {} 00:04:47.153 } 00:04:47.153 ]' 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@17 -- # jq length 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.153 [2024-07-24 04:54:01.740382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:47.153 [2024-07-24 04:54:01.740439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:47.153 [2024-07-24 04:54:01.740462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:47.153 [2024-07-24 04:54:01.740476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:47.153 [2024-07-24 04:54:01.742834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:47.153 [2024-07-24 04:54:01.742876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:47.153 Passthru0 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:47.153 { 00:04:47.153 "name": "Malloc2", 00:04:47.153 "aliases": [ 00:04:47.153 "28fae224-ad52-4f7a-b07b-9d23d99e7354" 00:04:47.153 ], 00:04:47.153 "product_name": "Malloc disk", 00:04:47.153 "block_size": 512, 00:04:47.153 "num_blocks": 16384, 00:04:47.153 
"uuid": "28fae224-ad52-4f7a-b07b-9d23d99e7354", 00:04:47.153 "assigned_rate_limits": { 00:04:47.153 "rw_ios_per_sec": 0, 00:04:47.153 "rw_mbytes_per_sec": 0, 00:04:47.153 "r_mbytes_per_sec": 0, 00:04:47.153 "w_mbytes_per_sec": 0 00:04:47.153 }, 00:04:47.153 "claimed": true, 00:04:47.153 "claim_type": "exclusive_write", 00:04:47.153 "zoned": false, 00:04:47.153 "supported_io_types": { 00:04:47.153 "read": true, 00:04:47.153 "write": true, 00:04:47.153 "unmap": true, 00:04:47.153 "flush": true, 00:04:47.153 "reset": true, 00:04:47.153 "nvme_admin": false, 00:04:47.153 "nvme_io": false, 00:04:47.153 "nvme_io_md": false, 00:04:47.153 "write_zeroes": true, 00:04:47.153 "zcopy": true, 00:04:47.153 "get_zone_info": false, 00:04:47.153 "zone_management": false, 00:04:47.153 "zone_append": false, 00:04:47.153 "compare": false, 00:04:47.153 "compare_and_write": false, 00:04:47.153 "abort": true, 00:04:47.153 "seek_hole": false, 00:04:47.153 "seek_data": false, 00:04:47.153 "copy": true, 00:04:47.153 "nvme_iov_md": false 00:04:47.153 }, 00:04:47.153 "memory_domains": [ 00:04:47.153 { 00:04:47.153 "dma_device_id": "system", 00:04:47.153 "dma_device_type": 1 00:04:47.153 }, 00:04:47.153 { 00:04:47.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.153 "dma_device_type": 2 00:04:47.153 } 00:04:47.153 ], 00:04:47.153 "driver_specific": {} 00:04:47.153 }, 00:04:47.153 { 00:04:47.153 "name": "Passthru0", 00:04:47.153 "aliases": [ 00:04:47.153 "19f029cf-a16d-5832-b61f-da5ef45e46cb" 00:04:47.153 ], 00:04:47.153 "product_name": "passthru", 00:04:47.153 "block_size": 512, 00:04:47.153 "num_blocks": 16384, 00:04:47.153 "uuid": "19f029cf-a16d-5832-b61f-da5ef45e46cb", 00:04:47.153 "assigned_rate_limits": { 00:04:47.153 "rw_ios_per_sec": 0, 00:04:47.153 "rw_mbytes_per_sec": 0, 00:04:47.153 "r_mbytes_per_sec": 0, 00:04:47.153 "w_mbytes_per_sec": 0 00:04:47.153 }, 00:04:47.153 "claimed": false, 00:04:47.153 "zoned": false, 00:04:47.153 "supported_io_types": { 00:04:47.153 "read": true, 
00:04:47.153 "write": true, 00:04:47.153 "unmap": true, 00:04:47.153 "flush": true, 00:04:47.153 "reset": true, 00:04:47.153 "nvme_admin": false, 00:04:47.153 "nvme_io": false, 00:04:47.153 "nvme_io_md": false, 00:04:47.153 "write_zeroes": true, 00:04:47.153 "zcopy": true, 00:04:47.153 "get_zone_info": false, 00:04:47.153 "zone_management": false, 00:04:47.153 "zone_append": false, 00:04:47.153 "compare": false, 00:04:47.153 "compare_and_write": false, 00:04:47.153 "abort": true, 00:04:47.153 "seek_hole": false, 00:04:47.153 "seek_data": false, 00:04:47.153 "copy": true, 00:04:47.153 "nvme_iov_md": false 00:04:47.153 }, 00:04:47.153 "memory_domains": [ 00:04:47.153 { 00:04:47.153 "dma_device_id": "system", 00:04:47.153 "dma_device_type": 1 00:04:47.153 }, 00:04:47.153 { 00:04:47.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.153 "dma_device_type": 2 00:04:47.153 } 00:04:47.153 ], 00:04:47.153 "driver_specific": { 00:04:47.153 "passthru": { 00:04:47.153 "name": "Passthru0", 00:04:47.153 "base_bdev_name": "Malloc2" 00:04:47.153 } 00:04:47.153 } 00:04:47.153 } 00:04:47.153 ]' 00:04:47.153 04:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:47.412 04:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:47.412 04:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:47.412 04:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.412 04:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.412 04:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.412 04:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:47.412 04:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.412 04:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.412 04:54:01 rpc.rpc_daemon_integrity 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.412 04:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:47.412 04:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.412 04:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.412 04:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.412 04:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:47.412 04:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:47.412 ************************************ 00:04:47.412 END TEST rpc_daemon_integrity 00:04:47.412 ************************************ 00:04:47.412 04:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:47.412 00:04:47.412 real 0m0.327s 00:04:47.412 user 0m0.181s 00:04:47.412 sys 0m0.051s 00:04:47.412 04:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.412 04:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.412 04:54:01 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:47.412 04:54:01 rpc -- rpc/rpc.sh@84 -- # killprocess 58948 00:04:47.412 04:54:01 rpc -- common/autotest_common.sh@948 -- # '[' -z 58948 ']' 00:04:47.412 04:54:01 rpc -- common/autotest_common.sh@952 -- # kill -0 58948 00:04:47.412 04:54:01 rpc -- common/autotest_common.sh@953 -- # uname 00:04:47.412 04:54:01 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:47.412 04:54:01 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58948 00:04:47.412 killing process with pid 58948 00:04:47.412 04:54:02 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:47.412 04:54:02 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:47.412 04:54:02 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58948' 00:04:47.412 04:54:02 
rpc -- common/autotest_common.sh@967 -- # kill 58948 00:04:47.412 04:54:02 rpc -- common/autotest_common.sh@972 -- # wait 58948 00:04:49.946 00:04:49.946 real 0m5.264s 00:04:49.946 user 0m5.723s 00:04:49.946 sys 0m0.943s 00:04:49.946 04:54:04 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.946 04:54:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.946 ************************************ 00:04:49.946 END TEST rpc 00:04:49.946 ************************************ 00:04:49.946 04:54:04 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:49.946 04:54:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.946 04:54:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.946 04:54:04 -- common/autotest_common.sh@10 -- # set +x 00:04:49.946 ************************************ 00:04:49.946 START TEST skip_rpc 00:04:49.946 ************************************ 00:04:49.946 04:54:04 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:50.205 * Looking for test storage... 
00:04:50.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:50.205 04:54:04 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:50.205 04:54:04 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:50.205 04:54:04 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:50.205 04:54:04 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.205 04:54:04 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.205 04:54:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.205 ************************************ 00:04:50.205 START TEST skip_rpc 00:04:50.205 ************************************ 00:04:50.205 04:54:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:50.205 04:54:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59169 00:04:50.205 04:54:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:50.205 04:54:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.205 04:54:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:50.205 [2024-07-24 04:54:04.762333] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:04:50.205 [2024-07-24 04:54:04.762500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59169 ] 00:04:50.463 [2024-07-24 04:54:04.947170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.722 [2024-07-24 04:54:05.166446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.980 [2024-07-24 04:54:05.393778] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:55.168 04:54:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:55.168 04:54:09 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:55.168 04:54:09 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:55.168 04:54:09 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:55.168 04:54:09 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:55.168 04:54:09 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:55.168 04:54:09 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:55.168 04:54:09 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:55.168 04:54:09 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.168 04:54:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.168 04:54:09 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:55.168 04:54:09 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:55.168 04:54:09 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:55.168 04:54:09 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 
00:04:55.168 04:54:09 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:55.168 04:54:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:55.168 04:54:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59169 00:04:55.168 04:54:09 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 59169 ']' 00:04:55.168 04:54:09 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 59169 00:04:55.168 04:54:09 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:55.168 04:54:09 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:55.168 04:54:09 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59169 00:04:55.168 04:54:09 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:55.168 killing process with pid 59169 00:04:55.168 04:54:09 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:55.168 04:54:09 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59169' 00:04:55.168 04:54:09 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 59169 00:04:55.168 04:54:09 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 59169 00:04:57.764 00:04:57.764 real 0m7.504s 00:04:57.764 user 0m7.001s 00:04:57.764 sys 0m0.409s 00:04:57.764 ************************************ 00:04:57.764 END TEST skip_rpc 00:04:57.764 ************************************ 00:04:57.764 04:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.764 04:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.764 04:54:12 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:57.764 04:54:12 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.764 04:54:12 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.764 04:54:12 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.764 ************************************ 00:04:57.764 START TEST skip_rpc_with_json 00:04:57.764 ************************************ 00:04:57.764 04:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:57.764 04:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:57.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.764 04:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59273 00:04:57.764 04:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.764 04:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.764 04:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59273 00:04:57.764 04:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 59273 ']' 00:04:57.764 04:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.764 04:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.764 04:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.764 04:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.765 04:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.765 [2024-07-24 04:54:12.326747] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:04:57.765 [2024-07-24 04:54:12.327251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59273 ] 00:04:58.023 [2024-07-24 04:54:12.516833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.282 [2024-07-24 04:54:12.731696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.541 [2024-07-24 04:54:12.964344] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:59.109 04:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:59.109 04:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:59.109 04:54:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:59.109 04:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.109 04:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:59.109 [2024-07-24 04:54:13.632637] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:59.109 request: 00:04:59.109 { 00:04:59.109 "trtype": "tcp", 00:04:59.109 "method": "nvmf_get_transports", 00:04:59.109 "req_id": 1 00:04:59.109 } 00:04:59.109 Got JSON-RPC error response 00:04:59.109 response: 00:04:59.109 { 00:04:59.109 "code": -19, 00:04:59.109 "message": "No such device" 00:04:59.109 } 00:04:59.109 04:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:59.109 04:54:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:59.109 04:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.109 04:54:13 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@10 -- # set +x 00:04:59.109 [2024-07-24 04:54:13.644774] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:59.109 04:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.109 04:54:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:59.109 04:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.109 04:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:59.368 04:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.368 04:54:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:59.368 { 00:04:59.368 "subsystems": [ 00:04:59.368 { 00:04:59.368 "subsystem": "keyring", 00:04:59.368 "config": [] 00:04:59.368 }, 00:04:59.368 { 00:04:59.368 "subsystem": "iobuf", 00:04:59.368 "config": [ 00:04:59.368 { 00:04:59.368 "method": "iobuf_set_options", 00:04:59.368 "params": { 00:04:59.368 "small_pool_count": 8192, 00:04:59.368 "large_pool_count": 1024, 00:04:59.368 "small_bufsize": 8192, 00:04:59.368 "large_bufsize": 135168 00:04:59.368 } 00:04:59.368 } 00:04:59.368 ] 00:04:59.368 }, 00:04:59.368 { 00:04:59.368 "subsystem": "sock", 00:04:59.368 "config": [ 00:04:59.368 { 00:04:59.368 "method": "sock_set_default_impl", 00:04:59.368 "params": { 00:04:59.368 "impl_name": "uring" 00:04:59.368 } 00:04:59.368 }, 00:04:59.368 { 00:04:59.368 "method": "sock_impl_set_options", 00:04:59.368 "params": { 00:04:59.368 "impl_name": "ssl", 00:04:59.368 "recv_buf_size": 4096, 00:04:59.368 "send_buf_size": 4096, 00:04:59.368 "enable_recv_pipe": true, 00:04:59.368 "enable_quickack": false, 00:04:59.368 "enable_placement_id": 0, 00:04:59.368 "enable_zerocopy_send_server": true, 00:04:59.368 "enable_zerocopy_send_client": false, 00:04:59.368 "zerocopy_threshold": 0, 00:04:59.368 "tls_version": 0, 
00:04:59.368 "enable_ktls": false 00:04:59.368 } 00:04:59.368 }, 00:04:59.368 { 00:04:59.368 "method": "sock_impl_set_options", 00:04:59.368 "params": { 00:04:59.368 "impl_name": "posix", 00:04:59.368 "recv_buf_size": 2097152, 00:04:59.368 "send_buf_size": 2097152, 00:04:59.368 "enable_recv_pipe": true, 00:04:59.368 "enable_quickack": false, 00:04:59.368 "enable_placement_id": 0, 00:04:59.368 "enable_zerocopy_send_server": true, 00:04:59.368 "enable_zerocopy_send_client": false, 00:04:59.368 "zerocopy_threshold": 0, 00:04:59.368 "tls_version": 0, 00:04:59.368 "enable_ktls": false 00:04:59.368 } 00:04:59.368 }, 00:04:59.368 { 00:04:59.368 "method": "sock_impl_set_options", 00:04:59.368 "params": { 00:04:59.368 "impl_name": "uring", 00:04:59.368 "recv_buf_size": 2097152, 00:04:59.368 "send_buf_size": 2097152, 00:04:59.368 "enable_recv_pipe": true, 00:04:59.368 "enable_quickack": false, 00:04:59.368 "enable_placement_id": 0, 00:04:59.368 "enable_zerocopy_send_server": false, 00:04:59.368 "enable_zerocopy_send_client": false, 00:04:59.368 "zerocopy_threshold": 0, 00:04:59.368 "tls_version": 0, 00:04:59.368 "enable_ktls": false 00:04:59.368 } 00:04:59.368 } 00:04:59.368 ] 00:04:59.368 }, 00:04:59.368 { 00:04:59.368 "subsystem": "vmd", 00:04:59.368 "config": [] 00:04:59.368 }, 00:04:59.368 { 00:04:59.368 "subsystem": "accel", 00:04:59.368 "config": [ 00:04:59.368 { 00:04:59.368 "method": "accel_set_options", 00:04:59.368 "params": { 00:04:59.368 "small_cache_size": 128, 00:04:59.368 "large_cache_size": 16, 00:04:59.368 "task_count": 2048, 00:04:59.368 "sequence_count": 2048, 00:04:59.368 "buf_count": 2048 00:04:59.368 } 00:04:59.368 } 00:04:59.368 ] 00:04:59.368 }, 00:04:59.368 { 00:04:59.368 "subsystem": "bdev", 00:04:59.368 "config": [ 00:04:59.368 { 00:04:59.368 "method": "bdev_set_options", 00:04:59.368 "params": { 00:04:59.368 "bdev_io_pool_size": 65535, 00:04:59.368 "bdev_io_cache_size": 256, 00:04:59.368 "bdev_auto_examine": true, 00:04:59.368 
"iobuf_small_cache_size": 128, 00:04:59.368 "iobuf_large_cache_size": 16 00:04:59.368 } 00:04:59.368 }, 00:04:59.368 { 00:04:59.368 "method": "bdev_raid_set_options", 00:04:59.368 "params": { 00:04:59.368 "process_window_size_kb": 1024, 00:04:59.368 "process_max_bandwidth_mb_sec": 0 00:04:59.368 } 00:04:59.368 }, 00:04:59.368 { 00:04:59.368 "method": "bdev_iscsi_set_options", 00:04:59.368 "params": { 00:04:59.368 "timeout_sec": 30 00:04:59.368 } 00:04:59.368 }, 00:04:59.368 { 00:04:59.368 "method": "bdev_nvme_set_options", 00:04:59.368 "params": { 00:04:59.368 "action_on_timeout": "none", 00:04:59.368 "timeout_us": 0, 00:04:59.368 "timeout_admin_us": 0, 00:04:59.368 "keep_alive_timeout_ms": 10000, 00:04:59.368 "arbitration_burst": 0, 00:04:59.368 "low_priority_weight": 0, 00:04:59.368 "medium_priority_weight": 0, 00:04:59.368 "high_priority_weight": 0, 00:04:59.368 "nvme_adminq_poll_period_us": 10000, 00:04:59.368 "nvme_ioq_poll_period_us": 0, 00:04:59.368 "io_queue_requests": 0, 00:04:59.368 "delay_cmd_submit": true, 00:04:59.368 "transport_retry_count": 4, 00:04:59.368 "bdev_retry_count": 3, 00:04:59.368 "transport_ack_timeout": 0, 00:04:59.368 "ctrlr_loss_timeout_sec": 0, 00:04:59.368 "reconnect_delay_sec": 0, 00:04:59.368 "fast_io_fail_timeout_sec": 0, 00:04:59.368 "disable_auto_failback": false, 00:04:59.368 "generate_uuids": false, 00:04:59.368 "transport_tos": 0, 00:04:59.368 "nvme_error_stat": false, 00:04:59.368 "rdma_srq_size": 0, 00:04:59.368 "io_path_stat": false, 00:04:59.368 "allow_accel_sequence": false, 00:04:59.368 "rdma_max_cq_size": 0, 00:04:59.368 "rdma_cm_event_timeout_ms": 0, 00:04:59.368 "dhchap_digests": [ 00:04:59.368 "sha256", 00:04:59.368 "sha384", 00:04:59.368 "sha512" 00:04:59.368 ], 00:04:59.368 "dhchap_dhgroups": [ 00:04:59.368 "null", 00:04:59.368 "ffdhe2048", 00:04:59.368 "ffdhe3072", 00:04:59.368 "ffdhe4096", 00:04:59.368 "ffdhe6144", 00:04:59.368 "ffdhe8192" 00:04:59.368 ] 00:04:59.368 } 00:04:59.368 }, 00:04:59.368 { 00:04:59.368 
"method": "bdev_nvme_set_hotplug", 00:04:59.368 "params": { 00:04:59.368 "period_us": 100000, 00:04:59.368 "enable": false 00:04:59.368 } 00:04:59.368 }, 00:04:59.368 { 00:04:59.368 "method": "bdev_wait_for_examine" 00:04:59.368 } 00:04:59.368 ] 00:04:59.368 }, 00:04:59.368 { 00:04:59.368 "subsystem": "scsi", 00:04:59.368 "config": null 00:04:59.368 }, 00:04:59.368 { 00:04:59.368 "subsystem": "scheduler", 00:04:59.368 "config": [ 00:04:59.368 { 00:04:59.368 "method": "framework_set_scheduler", 00:04:59.368 "params": { 00:04:59.368 "name": "static" 00:04:59.368 } 00:04:59.368 } 00:04:59.368 ] 00:04:59.368 }, 00:04:59.368 { 00:04:59.368 "subsystem": "vhost_scsi", 00:04:59.368 "config": [] 00:04:59.368 }, 00:04:59.368 { 00:04:59.368 "subsystem": "vhost_blk", 00:04:59.368 "config": [] 00:04:59.368 }, 00:04:59.368 { 00:04:59.368 "subsystem": "ublk", 00:04:59.368 "config": [] 00:04:59.368 }, 00:04:59.368 { 00:04:59.368 "subsystem": "nbd", 00:04:59.368 "config": [] 00:04:59.368 }, 00:04:59.368 { 00:04:59.368 "subsystem": "nvmf", 00:04:59.368 "config": [ 00:04:59.368 { 00:04:59.368 "method": "nvmf_set_config", 00:04:59.368 "params": { 00:04:59.368 "discovery_filter": "match_any", 00:04:59.368 "admin_cmd_passthru": { 00:04:59.368 "identify_ctrlr": false 00:04:59.368 } 00:04:59.368 } 00:04:59.368 }, 00:04:59.368 { 00:04:59.368 "method": "nvmf_set_max_subsystems", 00:04:59.368 "params": { 00:04:59.368 "max_subsystems": 1024 00:04:59.368 } 00:04:59.368 }, 00:04:59.368 { 00:04:59.368 "method": "nvmf_set_crdt", 00:04:59.368 "params": { 00:04:59.368 "crdt1": 0, 00:04:59.368 "crdt2": 0, 00:04:59.368 "crdt3": 0 00:04:59.368 } 00:04:59.368 }, 00:04:59.368 { 00:04:59.368 "method": "nvmf_create_transport", 00:04:59.368 "params": { 00:04:59.368 "trtype": "TCP", 00:04:59.368 "max_queue_depth": 128, 00:04:59.368 "max_io_qpairs_per_ctrlr": 127, 00:04:59.368 "in_capsule_data_size": 4096, 00:04:59.368 "max_io_size": 131072, 00:04:59.368 "io_unit_size": 131072, 00:04:59.368 "max_aq_depth": 
128, 00:04:59.368 "num_shared_buffers": 511, 00:04:59.368 "buf_cache_size": 4294967295, 00:04:59.368 "dif_insert_or_strip": false, 00:04:59.368 "zcopy": false, 00:04:59.368 "c2h_success": true, 00:04:59.368 "sock_priority": 0, 00:04:59.368 "abort_timeout_sec": 1, 00:04:59.368 "ack_timeout": 0, 00:04:59.368 "data_wr_pool_size": 0 00:04:59.368 } 00:04:59.368 } 00:04:59.368 ] 00:04:59.368 }, 00:04:59.368 { 00:04:59.368 "subsystem": "iscsi", 00:04:59.368 "config": [ 00:04:59.368 { 00:04:59.368 "method": "iscsi_set_options", 00:04:59.368 "params": { 00:04:59.368 "node_base": "iqn.2016-06.io.spdk", 00:04:59.368 "max_sessions": 128, 00:04:59.368 "max_connections_per_session": 2, 00:04:59.368 "max_queue_depth": 64, 00:04:59.368 "default_time2wait": 2, 00:04:59.368 "default_time2retain": 20, 00:04:59.368 "first_burst_length": 8192, 00:04:59.368 "immediate_data": true, 00:04:59.368 "allow_duplicated_isid": false, 00:04:59.368 "error_recovery_level": 0, 00:04:59.368 "nop_timeout": 60, 00:04:59.368 "nop_in_interval": 30, 00:04:59.368 "disable_chap": false, 00:04:59.368 "require_chap": false, 00:04:59.368 "mutual_chap": false, 00:04:59.368 "chap_group": 0, 00:04:59.368 "max_large_datain_per_connection": 64, 00:04:59.368 "max_r2t_per_connection": 4, 00:04:59.368 "pdu_pool_size": 36864, 00:04:59.368 "immediate_data_pool_size": 16384, 00:04:59.368 "data_out_pool_size": 2048 00:04:59.368 } 00:04:59.368 } 00:04:59.368 ] 00:04:59.368 } 00:04:59.368 ] 00:04:59.368 } 00:04:59.368 04:54:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:59.368 04:54:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59273 00:04:59.368 04:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59273 ']' 00:04:59.368 04:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59273 00:04:59.368 04:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:59.368 04:54:13 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:59.368 04:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59273 00:04:59.368 04:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:59.368 04:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:59.368 killing process with pid 59273 00:04:59.368 04:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59273' 00:04:59.369 04:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59273 00:04:59.369 04:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59273 00:05:01.903 04:54:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:01.903 04:54:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59329 00:05:01.903 04:54:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:07.173 04:54:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59329 00:05:07.173 04:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59329 ']' 00:05:07.173 04:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59329 00:05:07.173 04:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:07.173 04:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:07.173 04:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59329 00:05:07.173 killing process with pid 59329 00:05:07.173 04:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:07.173 04:54:21 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:07.173 04:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59329' 00:05:07.173 04:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59329 00:05:07.173 04:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59329 00:05:09.706 04:54:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:09.706 04:54:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:09.706 00:05:09.706 real 0m11.598s 00:05:09.706 user 0m10.995s 00:05:09.706 sys 0m0.929s 00:05:09.706 ************************************ 00:05:09.706 END TEST skip_rpc_with_json 00:05:09.706 ************************************ 00:05:09.706 04:54:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.706 04:54:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:09.706 04:54:23 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:09.706 04:54:23 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.706 04:54:23 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.706 04:54:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.706 ************************************ 00:05:09.706 START TEST skip_rpc_with_delay 00:05:09.706 ************************************ 00:05:09.706 04:54:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:09.706 04:54:23 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:09.706 04:54:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- 
# local es=0 00:05:09.706 04:54:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:09.706 04:54:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:09.706 04:54:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.706 04:54:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:09.707 04:54:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.707 04:54:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:09.707 04:54:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.707 04:54:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:09.707 04:54:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:09.707 04:54:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:09.707 [2024-07-24 04:54:23.993999] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:09.707 [2024-07-24 04:54:23.994179] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:09.707 04:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:09.707 04:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:09.707 04:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:09.707 04:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:09.707 00:05:09.707 real 0m0.220s 00:05:09.707 user 0m0.110s 00:05:09.707 sys 0m0.108s 00:05:09.707 04:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.707 04:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:09.707 ************************************ 00:05:09.707 END TEST skip_rpc_with_delay 00:05:09.707 ************************************ 00:05:09.707 04:54:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:09.707 04:54:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:09.707 04:54:24 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:09.707 04:54:24 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.707 04:54:24 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.707 04:54:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.707 ************************************ 00:05:09.707 START TEST exit_on_failed_rpc_init 00:05:09.707 ************************************ 00:05:09.707 04:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:09.707 04:54:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59468 00:05:09.707 04:54:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
00:05:09.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.707 04:54:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59468 00:05:09.707 04:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 59468 ']' 00:05:09.707 04:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.707 04:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.707 04:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.707 04:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.707 04:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:09.707 [2024-07-24 04:54:24.282315] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:05:09.707 [2024-07-24 04:54:24.282754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59468 ] 00:05:09.965 [2024-07-24 04:54:24.464731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.223 [2024-07-24 04:54:24.680850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.481 [2024-07-24 04:54:24.920174] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:11.047 04:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.047 04:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:11.047 04:54:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.047 04:54:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:11.047 04:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:11.047 04:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:11.047 04:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:11.047 04:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.047 04:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:11.047 04:54:25 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:11.047 04:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.047 04:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:11.047 04:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:11.047 04:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:11.305 [2024-07-24 04:54:25.717936] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:05:11.305 [2024-07-24 04:54:25.718112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59486 ] 00:05:11.305 [2024-07-24 04:54:25.885320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.563 [2024-07-24 04:54:26.171301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.563 [2024-07-24 04:54:26.171400] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:11.563 [2024-07-24 04:54:26.171422] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:11.563 [2024-07-24 04:54:26.171442] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:12.131 04:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:12.131 04:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:12.131 04:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:12.131 04:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:12.131 04:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:12.131 04:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:12.131 04:54:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:12.131 04:54:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59468 00:05:12.131 04:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 59468 ']' 00:05:12.131 04:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 59468 00:05:12.131 04:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:12.131 04:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.131 04:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59468 00:05:12.131 killing process with pid 59468 00:05:12.131 04:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.131 04:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.131 04:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 59468' 00:05:12.131 04:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 59468 00:05:12.131 04:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 59468 00:05:14.664 00:05:14.664 real 0m4.928s 00:05:14.664 user 0m5.586s 00:05:14.664 sys 0m0.616s 00:05:14.664 04:54:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.664 ************************************ 00:05:14.664 04:54:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:14.664 END TEST exit_on_failed_rpc_init 00:05:14.664 ************************************ 00:05:14.664 04:54:29 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:14.664 00:05:14.664 real 0m24.617s 00:05:14.664 user 0m23.821s 00:05:14.664 sys 0m2.290s 00:05:14.664 04:54:29 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.664 04:54:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.664 ************************************ 00:05:14.664 END TEST skip_rpc 00:05:14.664 ************************************ 00:05:14.664 04:54:29 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:14.664 04:54:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.664 04:54:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.664 04:54:29 -- common/autotest_common.sh@10 -- # set +x 00:05:14.664 ************************************ 00:05:14.664 START TEST rpc_client 00:05:14.664 ************************************ 00:05:14.664 04:54:29 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:14.664 * Looking for test storage... 
00:05:14.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:14.664 04:54:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:14.924 OK 00:05:14.924 04:54:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:14.924 00:05:14.924 real 0m0.179s 00:05:14.924 user 0m0.083s 00:05:14.924 sys 0m0.103s 00:05:14.924 04:54:29 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.924 04:54:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:14.924 ************************************ 00:05:14.924 END TEST rpc_client 00:05:14.924 ************************************ 00:05:14.924 04:54:29 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:14.924 04:54:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.924 04:54:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.924 04:54:29 -- common/autotest_common.sh@10 -- # set +x 00:05:14.924 ************************************ 00:05:14.924 START TEST json_config 00:05:14.924 ************************************ 00:05:14.924 04:54:29 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:14.924 04:54:29 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:14.924 04:54:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:14.924 04:54:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:14.924 04:54:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:14.924 04:54:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:14.924 04:54:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:14.924 04:54:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:14.924 04:54:29 json_config -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:05:14.924 04:54:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:14.924 04:54:29 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:14.924 04:54:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:14.924 04:54:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:14.924 04:54:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a36f1e81-73a2-4b75-9a56-c42aa4d68100 00:05:14.924 04:54:29 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a36f1e81-73a2-4b75-9a56-c42aa4d68100 00:05:14.924 04:54:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:14.924 04:54:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:14.924 04:54:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:14.924 04:54:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:14.924 04:54:29 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:14.924 04:54:29 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:14.924 04:54:29 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:14.924 04:54:29 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:14.924 04:54:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.924 04:54:29 json_config -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.924 04:54:29 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.924 04:54:29 json_config -- paths/export.sh@5 -- # export PATH 00:05:14.925 04:54:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.925 04:54:29 json_config -- nvmf/common.sh@47 -- # : 0 00:05:14.925 04:54:29 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:14.925 04:54:29 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:14.925 04:54:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:14.925 04:54:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:14.925 04:54:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:14.925 04:54:29 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:14.925 04:54:29 
json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:14.925 04:54:29 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:14.925 04:54:29 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:14.925 04:54:29 json_config -- json_config/json_config.sh@11 -- # [[ 1 -eq 1 ]] 00:05:14.925 04:54:29 json_config -- json_config/json_config.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:05:14.925 04:54:29 json_config -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:05:14.925 04:54:29 json_config -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:05:14.925 04:54:29 json_config -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:05:14.925 04:54:29 json_config -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:05:14.925 04:54:29 json_config -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:05:14.925 04:54:29 json_config -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:05:14.925 04:54:29 json_config -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:05:14.925 04:54:29 json_config -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:05:14.925 04:54:29 json_config -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:05:14.925 04:54:29 json_config -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:05:14.925 04:54:29 json_config -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:05:14.925 04:54:29 json_config -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:05:14.925 04:54:29 json_config -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:05:14.925 04:54:29 json_config -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:05:14.925 04:54:29 json_config -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:05:14.925 04:54:29 json_config -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:05:14.925 04:54:29 json_config -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 
00:05:14.925 04:54:29 json_config -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:05:14.925 04:54:29 json_config -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:05:14.925 04:54:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:14.925 04:54:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:14.925 04:54:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:14.925 04:54:29 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:14.925 04:54:29 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:14.925 04:54:29 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:14.925 04:54:29 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:14.925 04:54:29 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:14.925 04:54:29 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:14.925 04:54:29 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:14.925 04:54:29 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:14.925 04:54:29 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:14.925 INFO: JSON configuration test init 00:05:14.925 04:54:29 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:14.925 04:54:29 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:14.925 04:54:29 
json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:14.925 04:54:29 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:14.925 04:54:29 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:14.925 04:54:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.925 04:54:29 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:14.925 04:54:29 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:14.925 04:54:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.925 04:54:29 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:14.925 04:54:29 json_config -- json_config/common.sh@9 -- # local app=target 00:05:14.925 04:54:29 json_config -- json_config/common.sh@10 -- # shift 00:05:14.925 04:54:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:14.925 04:54:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:14.925 04:54:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:14.925 04:54:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.925 04:54:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.925 04:54:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59640 00:05:14.925 Waiting for target to run... 00:05:14.925 04:54:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:14.925 04:54:29 json_config -- json_config/common.sh@25 -- # waitforlisten 59640 /var/tmp/spdk_tgt.sock 00:05:14.925 04:54:29 json_config -- common/autotest_common.sh@829 -- # '[' -z 59640 ']' 00:05:14.925 04:54:29 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:14.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:14.925 04:54:29 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.925 04:54:29 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:14.925 04:54:29 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.925 04:54:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.925 04:54:29 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:15.184 [2024-07-24 04:54:29.669663] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:05:15.184 [2024-07-24 04:54:29.669834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59640 ] 00:05:15.750 [2024-07-24 04:54:30.092691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.750 [2024-07-24 04:54:30.290554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.010 04:54:30 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.010 00:05:16.010 04:54:30 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:16.010 04:54:30 json_config -- json_config/common.sh@26 -- # echo '' 00:05:16.010 04:54:30 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:16.010 04:54:30 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:16.010 04:54:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:16.010 04:54:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.010 04:54:30 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:16.010 04:54:30 json_config -- 
json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:16.010 04:54:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:16.010 04:54:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.010 04:54:30 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:16.010 04:54:30 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:16.010 04:54:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:16.578 [2024-07-24 04:54:31.056415] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:17.146 04:54:31 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:17.146 04:54:31 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:17.146 04:54:31 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:17.146 04:54:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.146 04:54:31 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:17.146 04:54:31 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:17.146 04:54:31 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:17.146 04:54:31 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:17.146 04:54:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:17.146 04:54:31 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:17.406 04:54:31 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:17.406 04:54:31 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:17.406 04:54:31 
json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:17.406 04:54:31 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:17.406 04:54:31 json_config -- json_config/json_config.sh@51 -- # sort 00:05:17.406 04:54:31 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:17.406 04:54:31 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:17.406 04:54:31 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:17.406 04:54:31 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:17.406 04:54:31 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:17.406 04:54:31 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:17.406 04:54:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.406 04:54:32 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:17.406 04:54:32 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:17.406 04:54:32 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:17.406 04:54:32 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:17.406 04:54:32 json_config -- json_config/json_config.sh@291 -- # create_iscsi_subsystem_config 00:05:17.406 04:54:32 json_config -- json_config/json_config.sh@225 -- # timing_enter create_iscsi_subsystem_config 00:05:17.406 04:54:32 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:17.406 04:54:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.406 04:54:32 json_config -- json_config/json_config.sh@226 -- # tgt_rpc bdev_malloc_create 64 1024 --name MallocForIscsi0 00:05:17.406 04:54:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 64 1024 --name MallocForIscsi0 00:05:17.973 MallocForIscsi0 00:05:17.974 04:54:32 json_config -- 
json_config/json_config.sh@227 -- # tgt_rpc iscsi_create_portal_group 1 127.0.0.1:3260 00:05:17.974 04:54:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock iscsi_create_portal_group 1 127.0.0.1:3260 00:05:17.974 04:54:32 json_config -- json_config/json_config.sh@228 -- # tgt_rpc iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:05:17.974 04:54:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:05:18.233 04:54:32 json_config -- json_config/json_config.sh@229 -- # tgt_rpc iscsi_create_target_node Target3 Target3_alias MallocForIscsi0:0 1:2 64 -d 00:05:18.233 04:54:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock iscsi_create_target_node Target3 Target3_alias MallocForIscsi0:0 1:2 64 -d 00:05:18.491 04:54:32 json_config -- json_config/json_config.sh@230 -- # timing_exit create_iscsi_subsystem_config 00:05:18.491 04:54:32 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:18.492 04:54:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.492 04:54:33 json_config -- json_config/json_config.sh@294 -- # [[ 0 -eq 1 ]] 00:05:18.492 04:54:33 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:18.492 04:54:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:18.492 04:54:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.492 04:54:33 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:18.492 04:54:33 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:18.492 04:54:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name 
MallocBdevForConfigChangeCheck 00:05:18.750 MallocBdevForConfigChangeCheck 00:05:18.750 04:54:33 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:18.750 04:54:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:18.750 04:54:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.750 04:54:33 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:18.750 04:54:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:19.008 04:54:33 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:19.008 INFO: shutting down applications... 00:05:19.008 04:54:33 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:19.008 04:54:33 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:19.008 04:54:33 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:19.008 04:54:33 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:19.267 Calling clear_iscsi_subsystem 00:05:19.267 Calling clear_nvmf_subsystem 00:05:19.267 Calling clear_nbd_subsystem 00:05:19.267 Calling clear_ublk_subsystem 00:05:19.267 Calling clear_vhost_blk_subsystem 00:05:19.267 Calling clear_vhost_scsi_subsystem 00:05:19.267 Calling clear_bdev_subsystem 00:05:19.526 04:54:33 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:19.526 04:54:33 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:19.526 04:54:33 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:19.526 04:54:33 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 
00:05:19.526 04:54:33 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:19.526 04:54:33 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:19.785 04:54:34 json_config -- json_config/json_config.sh@349 -- # break 00:05:19.785 04:54:34 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:19.785 04:54:34 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:19.785 04:54:34 json_config -- json_config/common.sh@31 -- # local app=target 00:05:19.785 04:54:34 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:19.785 04:54:34 json_config -- json_config/common.sh@35 -- # [[ -n 59640 ]] 00:05:19.785 04:54:34 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59640 00:05:19.785 04:54:34 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:19.785 04:54:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.785 04:54:34 json_config -- json_config/common.sh@41 -- # kill -0 59640 00:05:19.785 04:54:34 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:20.353 04:54:34 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:20.353 04:54:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.353 04:54:34 json_config -- json_config/common.sh@41 -- # kill -0 59640 00:05:20.353 04:54:34 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:20.922 04:54:35 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:20.922 04:54:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.922 04:54:35 json_config -- json_config/common.sh@41 -- # kill -0 59640 00:05:20.922 04:54:35 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:20.922 04:54:35 json_config -- json_config/common.sh@43 -- # break 00:05:20.922 04:54:35 json_config -- 
json_config/common.sh@48 -- # [[ -n '' ]] 00:05:20.922 SPDK target shutdown done 00:05:20.922 INFO: relaunching applications... 00:05:20.922 04:54:35 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:20.922 04:54:35 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:20.922 04:54:35 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:20.922 04:54:35 json_config -- json_config/common.sh@9 -- # local app=target 00:05:20.922 04:54:35 json_config -- json_config/common.sh@10 -- # shift 00:05:20.922 04:54:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:20.922 04:54:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:20.922 04:54:35 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:20.922 04:54:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:20.922 04:54:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:20.922 04:54:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59839 00:05:20.922 04:54:35 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:20.922 Waiting for target to run... 00:05:20.922 04:54:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:05:20.922 04:54:35 json_config -- json_config/common.sh@25 -- # waitforlisten 59839 /var/tmp/spdk_tgt.sock 00:05:20.922 04:54:35 json_config -- common/autotest_common.sh@829 -- # '[' -z 59839 ']' 00:05:20.922 04:54:35 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:20.922 04:54:35 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:20.922 04:54:35 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:20.922 04:54:35 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.922 04:54:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.922 [2024-07-24 04:54:35.438854] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:05:20.922 [2024-07-24 04:54:35.439025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59839 ] 00:05:21.490 [2024-07-24 04:54:35.824716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.490 [2024-07-24 04:54:36.024605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.749 [2024-07-24 04:54:36.340193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:22.686 00:05:22.686 04:54:37 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.686 04:54:37 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:22.686 04:54:37 json_config -- json_config/common.sh@26 -- # echo '' 00:05:22.686 04:54:37 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:22.686 04:54:37 json_config -- 
json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:22.686 INFO: Checking if target configuration is the same... 00:05:22.686 04:54:37 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:22.686 04:54:37 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:22.686 04:54:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:22.686 + '[' 2 -ne 2 ']' 00:05:22.686 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:22.686 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:22.686 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:22.686 +++ basename /dev/fd/62 00:05:22.686 ++ mktemp /tmp/62.XXX 00:05:22.686 + tmp_file_1=/tmp/62.Zy5 00:05:22.686 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:22.686 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:22.686 + tmp_file_2=/tmp/spdk_tgt_config.json.zbq 00:05:22.686 + ret=0 00:05:22.686 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:22.945 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:22.945 + diff -u /tmp/62.Zy5 /tmp/spdk_tgt_config.json.zbq 00:05:22.945 INFO: JSON config files are the same 00:05:22.945 + echo 'INFO: JSON config files are the same' 00:05:22.945 + rm /tmp/62.Zy5 /tmp/spdk_tgt_config.json.zbq 00:05:22.945 + exit 0 00:05:22.945 INFO: changing configuration and checking if this can be detected... 00:05:22.945 04:54:37 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:22.945 04:54:37 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
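The "is the configuration the same?" check above runs both JSON documents through `config_filter.py -method sort` before `diff -u`, so key order cannot cause a spurious mismatch. A hedged sketch of that normalization using only python3's json module (file names and contents are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the config-comparison step: render both JSON files with sorted
# keys, then byte-compare the normalized forms with diff -u.
tmp1=$(mktemp /tmp/cfg1.XXXXXX)
tmp2=$(mktemp /tmp/cfg2.XXXXXX)
printf '{"b": 2, "a": 1}' > "$tmp1"          # same content ...
printf '{"a": 1, "b": 2}' > "$tmp2"          # ... different key order

sort_json() {  # stable, key-sorted rendering so ordering cannot cause a diff
    python3 -c 'import json,sys; print(json.dumps(json.load(sys.stdin), sort_keys=True, indent=2))' < "$1"
}

if diff -u <(sort_json "$tmp1") <(sort_json "$tmp2") > /dev/null; then
    result="INFO: JSON config files are the same"
else
    result="configuration change detected"
fi
echo "$result"
rm -f "$tmp1" "$tmp2"
```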
00:05:22.945 04:54:37 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:22.945 04:54:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:23.243 04:54:37 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:23.243 04:54:37 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:23.243 04:54:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:23.243 + '[' 2 -ne 2 ']' 00:05:23.243 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:23.243 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:23.243 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:23.243 +++ basename /dev/fd/62 00:05:23.525 ++ mktemp /tmp/62.XXX 00:05:23.525 + tmp_file_1=/tmp/62.UhR 00:05:23.525 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:23.525 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:23.525 + tmp_file_2=/tmp/spdk_tgt_config.json.wck 00:05:23.525 + ret=0 00:05:23.525 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:23.785 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:23.785 + diff -u /tmp/62.UhR /tmp/spdk_tgt_config.json.wck 00:05:23.785 + ret=1 00:05:23.785 + echo '=== Start of file: /tmp/62.UhR ===' 00:05:23.785 + cat /tmp/62.UhR 00:05:23.785 + echo '=== End of file: /tmp/62.UhR ===' 00:05:23.785 + echo '' 00:05:23.785 + echo '=== Start of file: /tmp/spdk_tgt_config.json.wck ===' 00:05:23.785 + cat /tmp/spdk_tgt_config.json.wck 00:05:23.785 + echo '=== End of file: /tmp/spdk_tgt_config.json.wck ===' 00:05:23.785 + echo '' 00:05:23.785 + rm /tmp/62.UhR 
/tmp/spdk_tgt_config.json.wck 00:05:23.785 + exit 1 00:05:23.785 INFO: configuration change detected. 00:05:23.785 04:54:38 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:23.785 04:54:38 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:23.785 04:54:38 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:23.785 04:54:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:23.785 04:54:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.785 04:54:38 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:23.785 04:54:38 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:23.785 04:54:38 json_config -- json_config/json_config.sh@321 -- # [[ -n 59839 ]] 00:05:23.785 04:54:38 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:23.785 04:54:38 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:23.785 04:54:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:23.785 04:54:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.785 04:54:38 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:23.785 04:54:38 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:23.785 04:54:38 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:23.785 04:54:38 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:23.785 04:54:38 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:23.785 04:54:38 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:23.785 04:54:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:23.785 04:54:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.785 04:54:38 json_config -- 
json_config/json_config.sh@327 -- # killprocess 59839 00:05:23.785 04:54:38 json_config -- common/autotest_common.sh@948 -- # '[' -z 59839 ']' 00:05:23.785 04:54:38 json_config -- common/autotest_common.sh@952 -- # kill -0 59839 00:05:23.785 04:54:38 json_config -- common/autotest_common.sh@953 -- # uname 00:05:23.785 04:54:38 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.785 04:54:38 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59839 00:05:23.785 04:54:38 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.785 04:54:38 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.785 killing process with pid 59839 00:05:23.785 04:54:38 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59839' 00:05:23.785 04:54:38 json_config -- common/autotest_common.sh@967 -- # kill 59839 00:05:23.785 04:54:38 json_config -- common/autotest_common.sh@972 -- # wait 59839 00:05:25.165 04:54:39 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:25.165 04:54:39 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:25.165 04:54:39 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:25.165 04:54:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.165 04:54:39 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:25.165 INFO: Success 00:05:25.165 04:54:39 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:25.165 ************************************ 00:05:25.165 END TEST json_config 00:05:25.165 ************************************ 00:05:25.165 00:05:25.165 real 0m9.976s 00:05:25.165 user 0m12.178s 00:05:25.165 sys 0m2.031s 00:05:25.165 04:54:39 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 
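The `killprocess` trace above guards the kill with `kill -0` (is the pid alive?), then terminates and `wait`s so the test leaves no zombie behind. A self-contained sketch of the same shape, with `sleep` standing in for the target process:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess helper: refuse to act on a dead pid, then
# terminate the process and reap it with wait.
sleep 30 &                               # stand-in for the spdk_tgt process
pid=$!

killprocess() {
    local p=$1
    kill -0 "$p" 2>/dev/null || return 1 # not running: nothing to do
    echo "killing process with pid $p"
    kill "$p"
    wait "$p" 2>/dev/null || true        # reap; TERM makes wait return 143
    return 0
}

killprocess "$pid"
if kill -0 "$pid" 2>/dev/null; then alive=yes; else alive=no; fi
echo "pid $pid alive after killprocess: $alive"
```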
00:05:25.165 04:54:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.165 04:54:39 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:25.165 04:54:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.165 04:54:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.165 04:54:39 -- common/autotest_common.sh@10 -- # set +x 00:05:25.166 ************************************ 00:05:25.166 START TEST json_config_extra_key 00:05:25.166 ************************************ 00:05:25.166 04:54:39 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:25.166 04:54:39 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:25.166 04:54:39 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:25.166 04:54:39 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:25.166 04:54:39 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:25.166 04:54:39 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:25.166 04:54:39 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:25.166 04:54:39 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:25.166 04:54:39 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:25.166 04:54:39 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:25.166 04:54:39 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:25.166 04:54:39 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:25.166 04:54:39 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:25.166 04:54:39 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a36f1e81-73a2-4b75-9a56-c42aa4d68100 00:05:25.166 04:54:39 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a36f1e81-73a2-4b75-9a56-c42aa4d68100 00:05:25.166 04:54:39 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:25.166 04:54:39 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:25.166 04:54:39 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:25.166 04:54:39 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:25.166 04:54:39 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:25.166 04:54:39 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:25.166 04:54:39 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:25.166 04:54:39 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:25.166 04:54:39 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.166 04:54:39 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.166 04:54:39 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.166 04:54:39 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:25.166 04:54:39 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.166 04:54:39 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:25.166 04:54:39 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:25.166 04:54:39 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:25.166 04:54:39 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:25.166 04:54:39 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:25.166 04:54:39 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
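The `paths/export.sh` lines above prepend the go/protoc/golangci directories on every source, so the exported PATH carries the same entries several times over. This is harmless but noisy; a hedged sketch (not part of SPDK) of an order-preserving dedup that would collapse such repeats:

```shell
#!/usr/bin/env bash
# Order-preserving PATH deduplication: keep the first occurrence of each
# entry and drop the repeats that stack up from repeated sourcing.
dedup_path() {
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

demo='/opt/go/bin:/usr/bin:/opt/go/bin:/usr/local/bin:/usr/bin'
deduped=$(dedup_path "$demo")
echo "$deduped"
```

Splitting on `:` via awk's record separator keeps each directory intact even if it contains spaces, which a naive `tr`/`sort -u` pipeline would not (and `sort -u` would also lose the ordering).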
00:05:25.166 04:54:39 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:25.166 04:54:39 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:25.166 04:54:39 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:25.166 04:54:39 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:25.166 04:54:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:25.166 INFO: launching applications... 00:05:25.166 Waiting for target to run... 00:05:25.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:25.166 04:54:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:25.166 04:54:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:25.166 04:54:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:25.166 04:54:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:25.166 04:54:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:25.166 04:54:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:25.166 04:54:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:25.166 04:54:39 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:25.166 04:54:39 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
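`json_config/common.sh` keys every per-app setting — `app_pid`, `app_socket`, `app_params`, `configs_path` — by an app name such as `target`, one bash associative array per attribute, as the `declare -A` lines above show. A minimal sketch of that lookup-table pattern (values illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the per-app tables from json_config/common.sh: one associative
# array per attribute, all indexed by the same app name.
declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')

app=target
app_pid["$app"]=12345                    # recorded once the app is launched
echo "$app -> pid=${app_pid[$app]} socket=${app_socket[$app]}"
```

Using parallel arrays keyed by app name lets the same start/stop helpers serve both `target` and (in other test variants) an `initiator` app without duplicated code paths.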
00:05:25.166 04:54:39 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:25.166 04:54:39 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:25.166 04:54:39 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:25.166 04:54:39 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:25.166 04:54:39 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:25.166 04:54:39 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:25.166 04:54:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:25.166 04:54:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:25.166 04:54:39 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59998 00:05:25.166 04:54:39 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:25.166 04:54:39 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59998 /var/tmp/spdk_tgt.sock 00:05:25.166 04:54:39 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 59998 ']' 00:05:25.166 04:54:39 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:25.166 04:54:39 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.166 04:54:39 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:05:25.166 04:54:39 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.166 04:54:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:25.166 04:54:39 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:25.166 [2024-07-24 04:54:39.695272] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:05:25.166 [2024-07-24 04:54:39.695701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59998 ] 00:05:25.735 [2024-07-24 04:54:40.100483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.735 [2024-07-24 04:54:40.305049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.994 [2024-07-24 04:54:40.520816] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:26.562 04:54:41 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.562 04:54:41 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:26.562 04:54:41 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:26.562 00:05:26.562 04:54:41 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:26.562 INFO: shutting down applications... 
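`json_config_test_shutdown_app` sends SIGINT and then polls `kill -0` up to 30 times with 0.5 s sleeps, as the repeated `sleep 0.5` iterations that follow show. A self-contained sketch of that bounded shutdown loop, with a trap-on-INT stand-in for the target:

```shell
#!/usr/bin/env bash
# Sketch of the graceful-shutdown loop: SIGINT the target, then poll with
# kill -0 under a bounded retry budget instead of blocking forever.
bash -c 'trap "exit 0" INT; while :; do sleep 0.2; done' &   # stand-in target
pid=$!
sleep 0.3                      # give the stand-in time to start running

kill -SIGINT "$pid"
shutdown=failed
for ((i = 0; i < 30; i++)); do
    if ! kill -0 "$pid" 2>/dev/null; then
        shutdown=done
        break
    fi
    sleep 0.5
done
echo "target shutdown $shutdown"
```

The bound matters in CI: a target that ignores SIGINT makes the loop fall through after ~15 s, letting the harness escalate (or fail the test) rather than hanging the whole pipeline.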
00:05:26.562 04:54:41 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:26.562 04:54:41 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:26.562 04:54:41 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:26.562 04:54:41 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59998 ]] 00:05:26.562 04:54:41 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59998 00:05:26.562 04:54:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:26.562 04:54:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:26.562 04:54:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59998 00:05:26.562 04:54:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:27.130 04:54:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:27.130 04:54:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.130 04:54:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59998 00:05:27.130 04:54:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:27.697 04:54:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:27.697 04:54:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.697 04:54:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59998 00:05:27.697 04:54:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:28.264 04:54:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:28.264 04:54:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.264 04:54:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59998 00:05:28.264 04:54:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:28.523 04:54:43 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:05:28.523 04:54:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.523 04:54:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59998 00:05:28.523 04:54:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:29.089 04:54:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:29.089 04:54:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:29.089 04:54:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59998 00:05:29.089 04:54:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:29.655 04:54:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:29.655 04:54:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:29.655 04:54:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59998 00:05:29.655 SPDK target shutdown done 00:05:29.655 Success 00:05:29.655 04:54:44 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:29.655 04:54:44 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:29.655 04:54:44 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:29.655 04:54:44 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:29.655 04:54:44 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:29.655 ************************************ 00:05:29.655 END TEST json_config_extra_key 00:05:29.655 ************************************ 00:05:29.655 00:05:29.655 real 0m4.641s 00:05:29.655 user 0m4.260s 00:05:29.655 sys 0m0.572s 00:05:29.655 04:54:44 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.655 04:54:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:29.655 04:54:44 -- spdk/autotest.sh@174 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:29.655 04:54:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.655 04:54:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.655 04:54:44 -- common/autotest_common.sh@10 -- # set +x 00:05:29.656 ************************************ 00:05:29.656 START TEST alias_rpc 00:05:29.656 ************************************ 00:05:29.656 04:54:44 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:29.656 * Looking for test storage... 00:05:29.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:29.656 04:54:44 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:29.656 04:54:44 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=60096 00:05:29.656 04:54:44 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:29.656 04:54:44 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 60096 00:05:29.656 04:54:44 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 60096 ']' 00:05:29.656 04:54:44 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.656 04:54:44 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.656 04:54:44 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.656 04:54:44 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.656 04:54:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.914 [2024-07-24 04:54:44.412319] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:05:29.914 [2024-07-24 04:54:44.412494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60096 ] 00:05:30.173 [2024-07-24 04:54:44.594641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.432 [2024-07-24 04:54:44.811273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.432 [2024-07-24 04:54:45.033597] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:31.368 04:54:45 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.368 04:54:45 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:31.368 04:54:45 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:31.368 04:54:45 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 60096 00:05:31.368 04:54:45 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 60096 ']' 00:05:31.368 04:54:45 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 60096 00:05:31.368 04:54:45 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:31.368 04:54:45 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:31.368 04:54:45 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60096 00:05:31.368 killing process with pid 60096 00:05:31.368 04:54:45 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:31.368 04:54:45 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:31.368 04:54:45 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60096' 00:05:31.368 04:54:45 alias_rpc -- common/autotest_common.sh@967 -- # kill 60096 00:05:31.368 04:54:45 alias_rpc -- common/autotest_common.sh@972 -- # wait 60096 00:05:33.902 
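`alias_rpc.sh` installs `trap 'killprocess $spdk_tgt_pid; exit 1' ERR` before running its RPCs, so a failing step still tears the daemon down. A hedged sketch of that cleanup-on-error pattern; note the trap here exits 0 purely so the demo's caller sees success, where the real script exits 1:

```shell
#!/usr/bin/env bash
# Sketch of the ERR-trap cleanup from alias_rpc.sh: a failing test step
# triggers the trap, which kills the background daemon before exiting.
flag="$(mktemp -d)/cleanup_fired"

sleep 30 &                         # stand-in for the spdk_tgt daemon
spdk_tgt_pid=$!

run_steps() (                      # subshell: the trap's exit ends only this
    trap 'kill "$spdk_tgt_pid" 2>/dev/null; touch "$flag"; exit 0' ERR
    true                           # a passing step
    false                          # a failing step: the ERR trap fires here
    echo "unreachable"
)
run_steps

if [ -e "$flag" ]; then cleanup=fired; else cleanup=missed; fi
echo "ERR-trap cleanup: $cleanup"
wait "$spdk_tgt_pid" 2>/dev/null || true
```

One bash subtlety worth knowing: the ERR trap is suppressed for commands tested in `if`/`&&`/`||` contexts, so `run_steps || true` would silently skip the cleanup — the demo calls the subshell bare for that reason.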
************************************ 00:05:33.902 END TEST alias_rpc 00:05:33.902 ************************************ 00:05:33.902 00:05:33.902 real 0m4.243s 00:05:33.902 user 0m4.284s 00:05:33.902 sys 0m0.585s 00:05:33.902 04:54:48 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.902 04:54:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.902 04:54:48 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:33.902 04:54:48 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:33.902 04:54:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:33.902 04:54:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.902 04:54:48 -- common/autotest_common.sh@10 -- # set +x 00:05:33.902 ************************************ 00:05:33.902 START TEST spdkcli_tcp 00:05:33.902 ************************************ 00:05:33.902 04:54:48 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:34.161 * Looking for test storage... 
00:05:34.162 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:34.162 04:54:48 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:34.162 04:54:48 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:34.162 04:54:48 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:34.162 04:54:48 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:34.162 04:54:48 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:34.162 04:54:48 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:34.162 04:54:48 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:34.162 04:54:48 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:34.162 04:54:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:34.162 04:54:48 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=60201 00:05:34.162 04:54:48 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:34.162 04:54:48 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 60201 00:05:34.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.162 04:54:48 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 60201 ']' 00:05:34.162 04:54:48 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.162 04:54:48 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.162 04:54:48 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
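`tcp.sh` sets `IP_ADDRESS=127.0.0.1` and `PORT=9998` above because it bridges the target's UNIX RPC socket to TCP with `socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock`, letting `rpc.py -s 127.0.0.1 -p 9998` speak to a UNIX-only listener. A self-contained sketch of that bridge in python3 — everything here is a stand-in (uppercasing echo server instead of an RPC server, an ephemeral port instead of 9998, one request/reply instead of full byte shuttling):

```shell
#!/usr/bin/env bash
# Sketch of what the socat TCP-LISTEN / UNIX-CONNECT bridge does: accept a
# TCP client and relay its traffic to a UNIX-domain socket server.
reply=$(python3 - <<'PY'
import os, socket, tempfile, threading

unix_path = os.path.join(tempfile.mkdtemp(), "demo.sock")

# Stand-in "RPC" server on the UNIX socket: echo one request, upper-cased.
usrv = socket.socket(socket.AF_UNIX)
usrv.bind(unix_path)
usrv.listen(1)
def serve_unix():
    conn, _ = usrv.accept()
    conn.sendall(conn.recv(1024).upper())
    conn.close()
threading.Thread(target=serve_unix, daemon=True).start()

# The socat analogue: TCP listener that forwards to the UNIX socket.
tsrv = socket.socket()
tsrv.bind(("127.0.0.1", 0))          # ephemeral port instead of 9998
port = tsrv.getsockname()[1]
tsrv.listen(1)
def bridge():
    tconn, _ = tsrv.accept()
    uconn = socket.socket(socket.AF_UNIX)
    uconn.connect(unix_path)
    uconn.sendall(tconn.recv(1024))  # one request ...
    tconn.sendall(uconn.recv(1024))  # ... one reply, enough for the sketch
threading.Thread(target=bridge, daemon=True).start()

# TCP client, standing in for rpc.py -s 127.0.0.1 -p <port>.
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"rpc_get_methods")
print(cli.recv(1024).decode())
PY
)
echo "$reply"
```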
00:05:34.162 04:54:48 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.162 04:54:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:34.162 [2024-07-24 04:54:48.707257] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:05:34.162 [2024-07-24 04:54:48.707425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60201 ] 00:05:34.421 [2024-07-24 04:54:48.888339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.680 [2024-07-24 04:54:49.107363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.680 [2024-07-24 04:54:49.107383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.939 [2024-07-24 04:54:49.344025] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:35.507 04:54:50 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.507 04:54:50 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:35.507 04:54:50 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=60223 00:05:35.507 04:54:50 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:35.507 04:54:50 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:35.767 [ 00:05:35.767 "bdev_malloc_delete", 00:05:35.767 "bdev_malloc_create", 00:05:35.767 "bdev_null_resize", 00:05:35.767 "bdev_null_delete", 00:05:35.767 "bdev_null_create", 00:05:35.767 "bdev_nvme_cuse_unregister", 00:05:35.767 "bdev_nvme_cuse_register", 00:05:35.767 "bdev_opal_new_user", 00:05:35.767 "bdev_opal_set_lock_state", 00:05:35.767 "bdev_opal_delete", 00:05:35.767 "bdev_opal_get_info", 00:05:35.767 "bdev_opal_create", 
00:05:35.767 "bdev_nvme_opal_revert", 00:05:35.767 "bdev_nvme_opal_init", 00:05:35.767 "bdev_nvme_send_cmd", 00:05:35.767 "bdev_nvme_get_path_iostat", 00:05:35.767 "bdev_nvme_get_mdns_discovery_info", 00:05:35.767 "bdev_nvme_stop_mdns_discovery", 00:05:35.767 "bdev_nvme_start_mdns_discovery", 00:05:35.767 "bdev_nvme_set_multipath_policy", 00:05:35.767 "bdev_nvme_set_preferred_path", 00:05:35.767 "bdev_nvme_get_io_paths", 00:05:35.767 "bdev_nvme_remove_error_injection", 00:05:35.767 "bdev_nvme_add_error_injection", 00:05:35.767 "bdev_nvme_get_discovery_info", 00:05:35.767 "bdev_nvme_stop_discovery", 00:05:35.767 "bdev_nvme_start_discovery", 00:05:35.767 "bdev_nvme_get_controller_health_info", 00:05:35.767 "bdev_nvme_disable_controller", 00:05:35.767 "bdev_nvme_enable_controller", 00:05:35.767 "bdev_nvme_reset_controller", 00:05:35.767 "bdev_nvme_get_transport_statistics", 00:05:35.767 "bdev_nvme_apply_firmware", 00:05:35.767 "bdev_nvme_detach_controller", 00:05:35.767 "bdev_nvme_get_controllers", 00:05:35.767 "bdev_nvme_attach_controller", 00:05:35.767 "bdev_nvme_set_hotplug", 00:05:35.767 "bdev_nvme_set_options", 00:05:35.767 "bdev_passthru_delete", 00:05:35.767 "bdev_passthru_create", 00:05:35.767 "bdev_lvol_set_parent_bdev", 00:05:35.767 "bdev_lvol_set_parent", 00:05:35.767 "bdev_lvol_check_shallow_copy", 00:05:35.767 "bdev_lvol_start_shallow_copy", 00:05:35.767 "bdev_lvol_grow_lvstore", 00:05:35.767 "bdev_lvol_get_lvols", 00:05:35.767 "bdev_lvol_get_lvstores", 00:05:35.767 "bdev_lvol_delete", 00:05:35.767 "bdev_lvol_set_read_only", 00:05:35.767 "bdev_lvol_resize", 00:05:35.767 "bdev_lvol_decouple_parent", 00:05:35.767 "bdev_lvol_inflate", 00:05:35.767 "bdev_lvol_rename", 00:05:35.767 "bdev_lvol_clone_bdev", 00:05:35.767 "bdev_lvol_clone", 00:05:35.767 "bdev_lvol_snapshot", 00:05:35.767 "bdev_lvol_create", 00:05:35.767 "bdev_lvol_delete_lvstore", 00:05:35.767 "bdev_lvol_rename_lvstore", 00:05:35.767 "bdev_lvol_create_lvstore", 00:05:35.767 
"bdev_raid_set_options", 00:05:35.767 "bdev_raid_remove_base_bdev", 00:05:35.767 "bdev_raid_add_base_bdev", 00:05:35.767 "bdev_raid_delete", 00:05:35.767 "bdev_raid_create", 00:05:35.767 "bdev_raid_get_bdevs", 00:05:35.767 "bdev_error_inject_error", 00:05:35.767 "bdev_error_delete", 00:05:35.767 "bdev_error_create", 00:05:35.767 "bdev_split_delete", 00:05:35.767 "bdev_split_create", 00:05:35.767 "bdev_delay_delete", 00:05:35.767 "bdev_delay_create", 00:05:35.767 "bdev_delay_update_latency", 00:05:35.767 "bdev_zone_block_delete", 00:05:35.767 "bdev_zone_block_create", 00:05:35.767 "blobfs_create", 00:05:35.767 "blobfs_detect", 00:05:35.767 "blobfs_set_cache_size", 00:05:35.767 "bdev_aio_delete", 00:05:35.767 "bdev_aio_rescan", 00:05:35.767 "bdev_aio_create", 00:05:35.767 "bdev_ftl_set_property", 00:05:35.767 "bdev_ftl_get_properties", 00:05:35.767 "bdev_ftl_get_stats", 00:05:35.767 "bdev_ftl_unmap", 00:05:35.767 "bdev_ftl_unload", 00:05:35.767 "bdev_ftl_delete", 00:05:35.767 "bdev_ftl_load", 00:05:35.767 "bdev_ftl_create", 00:05:35.767 "bdev_virtio_attach_controller", 00:05:35.767 "bdev_virtio_scsi_get_devices", 00:05:35.767 "bdev_virtio_detach_controller", 00:05:35.767 "bdev_virtio_blk_set_hotplug", 00:05:35.767 "bdev_iscsi_delete", 00:05:35.767 "bdev_iscsi_create", 00:05:35.767 "bdev_iscsi_set_options", 00:05:35.767 "bdev_uring_delete", 00:05:35.767 "bdev_uring_rescan", 00:05:35.767 "bdev_uring_create", 00:05:35.767 "accel_error_inject_error", 00:05:35.767 "ioat_scan_accel_module", 00:05:35.767 "dsa_scan_accel_module", 00:05:35.767 "iaa_scan_accel_module", 00:05:35.767 "keyring_file_remove_key", 00:05:35.767 "keyring_file_add_key", 00:05:35.767 "keyring_linux_set_options", 00:05:35.767 "iscsi_get_histogram", 00:05:35.767 "iscsi_enable_histogram", 00:05:35.767 "iscsi_set_options", 00:05:35.767 "iscsi_get_auth_groups", 00:05:35.767 "iscsi_auth_group_remove_secret", 00:05:35.767 "iscsi_auth_group_add_secret", 00:05:35.767 "iscsi_delete_auth_group", 00:05:35.767 
"iscsi_create_auth_group", 00:05:35.767 "iscsi_set_discovery_auth", 00:05:35.767 "iscsi_get_options", 00:05:35.767 "iscsi_target_node_request_logout", 00:05:35.767 "iscsi_target_node_set_redirect", 00:05:35.767 "iscsi_target_node_set_auth", 00:05:35.767 "iscsi_target_node_add_lun", 00:05:35.767 "iscsi_get_stats", 00:05:35.767 "iscsi_get_connections", 00:05:35.767 "iscsi_portal_group_set_auth", 00:05:35.767 "iscsi_start_portal_group", 00:05:35.767 "iscsi_delete_portal_group", 00:05:35.767 "iscsi_create_portal_group", 00:05:35.768 "iscsi_get_portal_groups", 00:05:35.768 "iscsi_delete_target_node", 00:05:35.768 "iscsi_target_node_remove_pg_ig_maps", 00:05:35.768 "iscsi_target_node_add_pg_ig_maps", 00:05:35.768 "iscsi_create_target_node", 00:05:35.768 "iscsi_get_target_nodes", 00:05:35.768 "iscsi_delete_initiator_group", 00:05:35.768 "iscsi_initiator_group_remove_initiators", 00:05:35.768 "iscsi_initiator_group_add_initiators", 00:05:35.768 "iscsi_create_initiator_group", 00:05:35.768 "iscsi_get_initiator_groups", 00:05:35.768 "nvmf_set_crdt", 00:05:35.768 "nvmf_set_config", 00:05:35.768 "nvmf_set_max_subsystems", 00:05:35.768 "nvmf_stop_mdns_prr", 00:05:35.768 "nvmf_publish_mdns_prr", 00:05:35.768 "nvmf_subsystem_get_listeners", 00:05:35.768 "nvmf_subsystem_get_qpairs", 00:05:35.768 "nvmf_subsystem_get_controllers", 00:05:35.768 "nvmf_get_stats", 00:05:35.768 "nvmf_get_transports", 00:05:35.768 "nvmf_create_transport", 00:05:35.768 "nvmf_get_targets", 00:05:35.768 "nvmf_delete_target", 00:05:35.768 "nvmf_create_target", 00:05:35.768 "nvmf_subsystem_allow_any_host", 00:05:35.768 "nvmf_subsystem_remove_host", 00:05:35.768 "nvmf_subsystem_add_host", 00:05:35.768 "nvmf_ns_remove_host", 00:05:35.768 "nvmf_ns_add_host", 00:05:35.768 "nvmf_subsystem_remove_ns", 00:05:35.768 "nvmf_subsystem_add_ns", 00:05:35.768 "nvmf_subsystem_listener_set_ana_state", 00:05:35.768 "nvmf_discovery_get_referrals", 00:05:35.768 "nvmf_discovery_remove_referral", 00:05:35.768 
"nvmf_discovery_add_referral", 00:05:35.768 "nvmf_subsystem_remove_listener", 00:05:35.768 "nvmf_subsystem_add_listener", 00:05:35.768 "nvmf_delete_subsystem", 00:05:35.768 "nvmf_create_subsystem", 00:05:35.768 "nvmf_get_subsystems", 00:05:35.768 "env_dpdk_get_mem_stats", 00:05:35.768 "nbd_get_disks", 00:05:35.768 "nbd_stop_disk", 00:05:35.768 "nbd_start_disk", 00:05:35.768 "ublk_recover_disk", 00:05:35.768 "ublk_get_disks", 00:05:35.768 "ublk_stop_disk", 00:05:35.768 "ublk_start_disk", 00:05:35.768 "ublk_destroy_target", 00:05:35.768 "ublk_create_target", 00:05:35.768 "virtio_blk_create_transport", 00:05:35.768 "virtio_blk_get_transports", 00:05:35.768 "vhost_controller_set_coalescing", 00:05:35.768 "vhost_get_controllers", 00:05:35.768 "vhost_delete_controller", 00:05:35.768 "vhost_create_blk_controller", 00:05:35.768 "vhost_scsi_controller_remove_target", 00:05:35.768 "vhost_scsi_controller_add_target", 00:05:35.768 "vhost_start_scsi_controller", 00:05:35.768 "vhost_create_scsi_controller", 00:05:35.768 "thread_set_cpumask", 00:05:35.768 "framework_get_governor", 00:05:35.768 "framework_get_scheduler", 00:05:35.768 "framework_set_scheduler", 00:05:35.768 "framework_get_reactors", 00:05:35.768 "thread_get_io_channels", 00:05:35.768 "thread_get_pollers", 00:05:35.768 "thread_get_stats", 00:05:35.768 "framework_monitor_context_switch", 00:05:35.768 "spdk_kill_instance", 00:05:35.768 "log_enable_timestamps", 00:05:35.768 "log_get_flags", 00:05:35.768 "log_clear_flag", 00:05:35.768 "log_set_flag", 00:05:35.768 "log_get_level", 00:05:35.768 "log_set_level", 00:05:35.768 "log_get_print_level", 00:05:35.768 "log_set_print_level", 00:05:35.768 "framework_enable_cpumask_locks", 00:05:35.768 "framework_disable_cpumask_locks", 00:05:35.768 "framework_wait_init", 00:05:35.768 "framework_start_init", 00:05:35.768 "scsi_get_devices", 00:05:35.768 "bdev_get_histogram", 00:05:35.768 "bdev_enable_histogram", 00:05:35.768 "bdev_set_qos_limit", 00:05:35.768 
"bdev_set_qd_sampling_period", 00:05:35.768 "bdev_get_bdevs", 00:05:35.768 "bdev_reset_iostat", 00:05:35.768 "bdev_get_iostat", 00:05:35.768 "bdev_examine", 00:05:35.768 "bdev_wait_for_examine", 00:05:35.768 "bdev_set_options", 00:05:35.768 "notify_get_notifications", 00:05:35.768 "notify_get_types", 00:05:35.768 "accel_get_stats", 00:05:35.768 "accel_set_options", 00:05:35.768 "accel_set_driver", 00:05:35.768 "accel_crypto_key_destroy", 00:05:35.768 "accel_crypto_keys_get", 00:05:35.768 "accel_crypto_key_create", 00:05:35.768 "accel_assign_opc", 00:05:35.768 "accel_get_module_info", 00:05:35.768 "accel_get_opc_assignments", 00:05:35.768 "vmd_rescan", 00:05:35.768 "vmd_remove_device", 00:05:35.768 "vmd_enable", 00:05:35.768 "sock_get_default_impl", 00:05:35.768 "sock_set_default_impl", 00:05:35.768 "sock_impl_set_options", 00:05:35.768 "sock_impl_get_options", 00:05:35.768 "iobuf_get_stats", 00:05:35.768 "iobuf_set_options", 00:05:35.768 "framework_get_pci_devices", 00:05:35.768 "framework_get_config", 00:05:35.768 "framework_get_subsystems", 00:05:35.768 "trace_get_info", 00:05:35.768 "trace_get_tpoint_group_mask", 00:05:35.768 "trace_disable_tpoint_group", 00:05:35.768 "trace_enable_tpoint_group", 00:05:35.768 "trace_clear_tpoint_mask", 00:05:35.768 "trace_set_tpoint_mask", 00:05:35.768 "keyring_get_keys", 00:05:35.768 "spdk_get_version", 00:05:35.768 "rpc_get_methods" 00:05:35.768 ] 00:05:35.768 04:54:50 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:35.768 04:54:50 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:35.768 04:54:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:35.768 04:54:50 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:35.768 04:54:50 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 60201 00:05:35.768 04:54:50 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 60201 ']' 00:05:35.768 04:54:50 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 
60201 00:05:35.768 04:54:50 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:35.768 04:54:50 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.768 04:54:50 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60201 00:05:35.768 killing process with pid 60201 00:05:35.768 04:54:50 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:35.768 04:54:50 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:35.768 04:54:50 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60201' 00:05:35.768 04:54:50 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 60201 00:05:35.768 04:54:50 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 60201 00:05:38.305 ************************************ 00:05:38.305 END TEST spdkcli_tcp 00:05:38.305 ************************************ 00:05:38.305 00:05:38.305 real 0m4.315s 00:05:38.305 user 0m7.547s 00:05:38.305 sys 0m0.588s 00:05:38.305 04:54:52 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.305 04:54:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:38.305 04:54:52 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:38.305 04:54:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.305 04:54:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.305 04:54:52 -- common/autotest_common.sh@10 -- # set +x 00:05:38.305 ************************************ 00:05:38.305 START TEST dpdk_mem_utility 00:05:38.305 ************************************ 00:05:38.305 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:38.564 * Looking for test storage... 
00:05:38.564 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:38.564 04:54:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:38.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.564 04:54:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60315 00:05:38.564 04:54:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:38.564 04:54:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60315 00:05:38.564 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 60315 ']' 00:05:38.564 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.564 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.564 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.564 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.564 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:38.564 [2024-07-24 04:54:53.042466] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:05:38.564 [2024-07-24 04:54:53.042594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60315 ] 00:05:38.824 [2024-07-24 04:54:53.204127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.824 [2024-07-24 04:54:53.415920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.082 [2024-07-24 04:54:53.643330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:40.022 04:54:54 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.022 04:54:54 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:40.022 04:54:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:40.022 04:54:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:40.022 04:54:54 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.022 04:54:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:40.022 { 00:05:40.022 "filename": "/tmp/spdk_mem_dump.txt" 00:05:40.022 } 00:05:40.022 04:54:54 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.022 04:54:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:40.022 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:40.022 1 heaps totaling size 820.000000 MiB 00:05:40.022 size: 820.000000 MiB heap id: 0 00:05:40.022 end heaps---------- 00:05:40.022 8 mempools totaling size 598.116089 MiB 00:05:40.022 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:40.022 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:40.022 size: 84.521057 MiB name: bdev_io_60315 
00:05:40.022 size: 51.011292 MiB name: evtpool_60315 00:05:40.022 size: 50.003479 MiB name: msgpool_60315 00:05:40.022 size: 21.763794 MiB name: PDU_Pool 00:05:40.022 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:40.022 size: 0.026123 MiB name: Session_Pool 00:05:40.022 end mempools------- 00:05:40.022 6 memzones totaling size 4.142822 MiB 00:05:40.022 size: 1.000366 MiB name: RG_ring_0_60315 00:05:40.022 size: 1.000366 MiB name: RG_ring_1_60315 00:05:40.022 size: 1.000366 MiB name: RG_ring_4_60315 00:05:40.022 size: 1.000366 MiB name: RG_ring_5_60315 00:05:40.022 size: 0.125366 MiB name: RG_ring_2_60315 00:05:40.022 size: 0.015991 MiB name: RG_ring_3_60315 00:05:40.022 end memzones------- 00:05:40.022 04:54:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:40.022 heap id: 0 total size: 820.000000 MiB number of busy elements: 300 number of free elements: 18 00:05:40.022 list of free elements. size: 18.451538 MiB 00:05:40.022 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:40.022 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:40.022 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:40.022 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:40.022 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:40.022 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:40.022 element at address: 0x200019600000 with size: 0.999084 MiB 00:05:40.022 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:40.022 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:40.022 element at address: 0x200018e00000 with size: 0.959656 MiB 00:05:40.022 element at address: 0x200019900040 with size: 0.936401 MiB 00:05:40.022 element at address: 0x200000200000 with size: 0.829956 MiB 00:05:40.022 element at address: 0x20001b000000 with size: 0.564148 MiB 00:05:40.022 element at address: 0x200019200000 
with size: 0.487976 MiB 00:05:40.022 element at address: 0x200019a00000 with size: 0.485413 MiB 00:05:40.022 element at address: 0x200013800000 with size: 0.467896 MiB 00:05:40.022 element at address: 0x200028400000 with size: 0.390442 MiB 00:05:40.022 element at address: 0x200003a00000 with size: 0.351990 MiB 00:05:40.022 list of standard malloc elements. size: 199.284058 MiB 00:05:40.022 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:40.022 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:40.022 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:40.022 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:40.022 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:40.022 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:40.022 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:40.022 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:40.022 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:05:40.022 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:05:40.022 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:05:40.022 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:05:40.022 element at address: 
0x2000002d5280 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:05:40.022 
element at address: 0x2000002d7000 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:05:40.022 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:05:40.023 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:05:40.023 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:40.023 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:40.023 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:40.023 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200003a5afc0 with size: 0.000244 
MiB 00:05:40.023 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:40.023 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:05:40.023 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:05:40.023 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:05:40.023 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:05:40.023 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:05:40.023 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:05:40.023 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:05:40.023 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:05:40.023 element at address: 0x2000137ff980 
with size: 0.000244 MiB 00:05:40.023 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:05:40.023 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:40.023 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:40.023 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200013877c80 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200013877d80 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200013877e80 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200013877f80 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200013878080 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200013878180 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200013878280 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200013878380 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200013878480 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200013878580 with size: 0.000244 MiB 00:05:40.023 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:05:40.023 element at 
address: 0x20001927d9c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:40.023 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:05:40.023 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x200019abc680 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b091bc0 with size: 0.000244 MiB 
00:05:40.023 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0937c0 with 
size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:05:40.023 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:05:40.024 element at address: 
0x20001b0953c0 with size: 0.000244 MiB 00:05:40.024 element at address: 0x200028463f40 with size: 0.000244 MiB 00:05:40.024 element at address: 0x200028464040 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846af80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846b080 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846b180 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846b280 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846b380 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846b480 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846b580 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846b680 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846b780 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846b880 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846b980 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846be80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846c080 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846c180 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846c280 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846c380 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846c480 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846c580 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846c680 with size: 0.000244 MiB 00:05:40.024 
element at address: 0x20002846c780 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846c880 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846c980 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846d080 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846d180 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846d280 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846d380 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846d480 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846d580 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846d680 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846d780 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846d880 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846d980 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846da80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846db80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846de80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846df80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846e080 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846e180 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846e280 with size: 0.000244 
MiB 00:05:40.024 element at address: 0x20002846e380 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846e480 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846e580 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846e680 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846e780 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846e880 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846e980 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846f080 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846f180 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846f280 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846f380 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846f480 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846f580 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846f680 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846f780 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846f880 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846f980 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:05:40.024 element at address: 0x20002846fe80 
with size: 0.000244 MiB 00:05:40.024 list of memzone associated elements. size: 602.264404 MiB 00:05:40.024 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:40.024 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:40.024 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:40.024 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:40.024 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:40.024 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_60315_0 00:05:40.024 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:40.024 associated memzone info: size: 48.002930 MiB name: MP_evtpool_60315_0 00:05:40.024 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:40.024 associated memzone info: size: 48.002930 MiB name: MP_msgpool_60315_0 00:05:40.024 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:40.024 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:40.024 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:40.024 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:40.024 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:40.024 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_60315 00:05:40.024 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:40.024 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_60315 00:05:40.024 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:40.024 associated memzone info: size: 1.007996 MiB name: MP_evtpool_60315 00:05:40.024 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:40.024 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:40.024 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:40.024 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:40.024 element at 
address: 0x200018efde00 with size: 1.008179 MiB 00:05:40.024 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:40.024 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:40.024 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:40.024 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:40.024 associated memzone info: size: 1.000366 MiB name: RG_ring_0_60315 00:05:40.024 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:40.024 associated memzone info: size: 1.000366 MiB name: RG_ring_1_60315 00:05:40.024 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:40.024 associated memzone info: size: 1.000366 MiB name: RG_ring_4_60315 00:05:40.024 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:40.024 associated memzone info: size: 1.000366 MiB name: RG_ring_5_60315 00:05:40.024 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:40.024 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_60315 00:05:40.024 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:05:40.024 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:40.024 element at address: 0x200013878680 with size: 0.500549 MiB 00:05:40.024 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:40.024 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:05:40.024 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:40.024 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:40.024 associated memzone info: size: 0.125366 MiB name: RG_ring_2_60315 00:05:40.024 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:05:40.024 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:40.024 element at address: 0x200028464140 with size: 0.023804 MiB 00:05:40.024 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:40.025 element 
at address: 0x200003adb500 with size: 0.016174 MiB 00:05:40.025 associated memzone info: size: 0.015991 MiB name: RG_ring_3_60315 00:05:40.025 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:05:40.025 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:40.025 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:05:40.025 associated memzone info: size: 0.000183 MiB name: MP_msgpool_60315 00:05:40.025 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:40.025 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_60315 00:05:40.025 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:05:40.025 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:40.025 04:54:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:40.025 04:54:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60315 00:05:40.025 04:54:54 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 60315 ']' 00:05:40.025 04:54:54 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 60315 00:05:40.025 04:54:54 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:40.025 04:54:54 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:40.025 04:54:54 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60315 00:05:40.025 killing process with pid 60315 00:05:40.025 04:54:54 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:40.025 04:54:54 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:40.025 04:54:54 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60315' 00:05:40.025 04:54:54 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 60315 00:05:40.025 04:54:54 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 60315 00:05:42.562 
00:05:42.562 real 0m4.044s 00:05:42.562 user 0m4.007s 00:05:42.562 sys 0m0.522s 00:05:42.562 04:54:56 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.562 ************************************ 00:05:42.562 END TEST dpdk_mem_utility 00:05:42.562 ************************************ 00:05:42.562 04:54:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:42.562 04:54:56 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:42.562 04:54:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.562 04:54:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.562 04:54:56 -- common/autotest_common.sh@10 -- # set +x 00:05:42.562 ************************************ 00:05:42.562 START TEST event 00:05:42.562 ************************************ 00:05:42.562 04:54:56 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:42.562 * Looking for test storage... 
00:05:42.562 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:42.562 04:54:57 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:42.562 04:54:57 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:42.562 04:54:57 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:42.562 04:54:57 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:42.562 04:54:57 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.562 04:54:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.562 ************************************ 00:05:42.562 START TEST event_perf 00:05:42.562 ************************************ 00:05:42.562 04:54:57 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:42.562 Running I/O for 1 seconds...[2024-07-24 04:54:57.108387] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:05:42.562 [2024-07-24 04:54:57.108759] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60415 ] 00:05:42.821 [2024-07-24 04:54:57.289144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:43.081 [2024-07-24 04:54:57.509691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.081 [2024-07-24 04:54:57.509811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.081 Running I/O for 1 seconds...[2024-07-24 04:54:57.509947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.081 [2024-07-24 04:54:57.509985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:44.494 00:05:44.494 lcore 0: 194640 00:05:44.494 lcore 1: 194640 00:05:44.494 lcore 2: 194639 00:05:44.494 lcore 3: 194641 00:05:44.494 done. 
00:05:44.494 00:05:44.494 ************************************ 00:05:44.494 END TEST event_perf 00:05:44.494 ************************************ 00:05:44.494 real 0m1.879s 00:05:44.494 user 0m4.615s 00:05:44.494 sys 0m0.141s 00:05:44.494 04:54:58 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.494 04:54:58 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:44.494 04:54:58 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:44.494 04:54:58 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:44.494 04:54:58 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.494 04:54:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.494 ************************************ 00:05:44.494 START TEST event_reactor 00:05:44.494 ************************************ 00:05:44.494 04:54:58 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:44.494 [2024-07-24 04:54:59.048004] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:05:44.494 [2024-07-24 04:54:59.048161] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60460 ] 00:05:44.778 [2024-07-24 04:54:59.231742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.036 [2024-07-24 04:54:59.446212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.414 test_start 00:05:46.414 oneshot 00:05:46.414 tick 100 00:05:46.414 tick 100 00:05:46.414 tick 250 00:05:46.414 tick 100 00:05:46.414 tick 100 00:05:46.414 tick 100 00:05:46.414 tick 250 00:05:46.414 tick 500 00:05:46.414 tick 100 00:05:46.414 tick 100 00:05:46.414 tick 250 00:05:46.414 tick 100 00:05:46.414 tick 100 00:05:46.414 test_end 00:05:46.414 00:05:46.414 real 0m1.869s 00:05:46.414 user 0m1.625s 00:05:46.414 sys 0m0.134s 00:05:46.414 ************************************ 00:05:46.414 END TEST event_reactor 00:05:46.414 ************************************ 00:05:46.414 04:55:00 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.414 04:55:00 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:46.414 04:55:00 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:46.414 04:55:00 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:46.414 04:55:00 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.414 04:55:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.414 ************************************ 00:05:46.414 START TEST event_reactor_perf 00:05:46.414 ************************************ 00:05:46.414 04:55:00 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:46.414 [2024-07-24 
04:55:00.979787] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:05:46.414 [2024-07-24 04:55:00.979944] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60502 ] 00:05:46.673 [2024-07-24 04:55:01.160340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.932 [2024-07-24 04:55:01.372859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.310 test_start 00:05:48.310 test_end 00:05:48.310 Performance: 389666 events per second 00:05:48.310 ************************************ 00:05:48.310 END TEST event_reactor_perf 00:05:48.310 00:05:48.310 real 0m1.865s 00:05:48.310 user 0m1.611s 00:05:48.310 sys 0m0.145s 00:05:48.310 04:55:02 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.310 04:55:02 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:48.310 ************************************ 00:05:48.310 04:55:02 event -- event/event.sh@49 -- # uname -s 00:05:48.310 04:55:02 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:48.310 04:55:02 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:48.310 04:55:02 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.310 04:55:02 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.310 04:55:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.310 ************************************ 00:05:48.310 START TEST event_scheduler 00:05:48.310 ************************************ 00:05:48.310 04:55:02 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:48.569 * Looking for test storage... 
00:05:48.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:48.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.569 04:55:02 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:48.570 04:55:02 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60570 00:05:48.570 04:55:02 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.570 04:55:02 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60570 00:05:48.570 04:55:02 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 60570 ']' 00:05:48.570 04:55:02 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:48.570 04:55:02 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.570 04:55:02 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.570 04:55:02 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.570 04:55:02 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.570 04:55:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:48.570 [2024-07-24 04:55:03.048173] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:05:48.570 [2024-07-24 04:55:03.048311] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60570 ] 00:05:48.829 [2024-07-24 04:55:03.217503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:49.088 [2024-07-24 04:55:03.478757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.088 [2024-07-24 04:55:03.478915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.088 [2024-07-24 04:55:03.479899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.088 [2024-07-24 04:55:03.479927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.655 04:55:03 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.655 04:55:03 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:49.655 04:55:03 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:49.655 04:55:03 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.655 04:55:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.655 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:49.655 POWER: Cannot set governor of lcore 0 to userspace 00:05:49.655 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:49.655 POWER: Cannot set governor of lcore 0 to performance 00:05:49.655 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:49.655 POWER: Cannot set governor of lcore 0 to userspace 00:05:49.655 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:49.655 POWER: Cannot set governor of lcore 0 to userspace 00:05:49.655 GUEST_CHANNEL: Opening 
channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:49.655 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:49.655 POWER: Unable to set Power Management Environment for lcore 0 00:05:49.655 [2024-07-24 04:55:03.993411] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:49.655 [2024-07-24 04:55:03.993428] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:49.655 [2024-07-24 04:55:03.993445] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:49.655 [2024-07-24 04:55:03.993477] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:49.655 [2024-07-24 04:55:03.993492] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:49.655 [2024-07-24 04:55:03.993503] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:49.655 04:55:03 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.655 04:55:03 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:49.655 04:55:03 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.655 04:55:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.655 [2024-07-24 04:55:04.224208] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:49.914 [2024-07-24 04:55:04.334908] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:49.914 04:55:04 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.914 04:55:04 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:49.914 04:55:04 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.914 04:55:04 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.914 04:55:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.914 ************************************ 00:05:49.914 START TEST scheduler_create_thread 00:05:49.914 ************************************ 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.914 2 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.914 3 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.914 4 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.914 5 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.914 6 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:49.914 7 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.914 8 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.914 9 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.914 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.914 10 00:05:49.915 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.915 04:55:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:49.915 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.915 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.915 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.915 04:55:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:49.915 04:55:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:49.915 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.915 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.915 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.915 04:55:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:49.915 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.915 04:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.849 04:55:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.849 04:55:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:50.849 04:55:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:50.849 04:55:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.849 04:55:05 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.225 ************************************ 00:05:52.225 END TEST scheduler_create_thread 00:05:52.225 ************************************ 00:05:52.225 04:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.225 00:05:52.225 real 0m2.140s 00:05:52.225 user 0m0.019s 00:05:52.225 sys 0m0.008s 00:05:52.225 04:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.225 04:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.225 04:55:06 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:52.225 04:55:06 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60570 00:05:52.225 04:55:06 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 60570 ']' 00:05:52.225 04:55:06 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 60570 00:05:52.225 04:55:06 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:52.225 04:55:06 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:52.225 04:55:06 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60570 00:05:52.225 killing process with pid 60570 00:05:52.225 04:55:06 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:52.225 04:55:06 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:52.225 04:55:06 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60570' 00:05:52.225 04:55:06 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 60570 00:05:52.225 04:55:06 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 60570 00:05:52.483 [2024-07-24 04:55:06.967822] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:53.861 00:05:53.861 real 0m5.419s 00:05:53.861 user 0m8.845s 00:05:53.861 sys 0m0.468s 00:05:53.861 ************************************ 00:05:53.861 END TEST event_scheduler 00:05:53.861 ************************************ 00:05:53.861 04:55:08 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.861 04:55:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.861 04:55:08 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:53.861 04:55:08 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:53.861 04:55:08 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.861 04:55:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.861 04:55:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.861 ************************************ 00:05:53.861 START TEST app_repeat 00:05:53.861 ************************************ 00:05:53.861 04:55:08 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:53.861 04:55:08 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.861 04:55:08 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.861 04:55:08 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:53.861 04:55:08 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.861 04:55:08 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:53.861 04:55:08 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:53.861 04:55:08 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:53.861 Process app_repeat pid: 60676 00:05:53.861 spdk_app_start Round 0 00:05:53.861 04:55:08 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60676 00:05:53.861 04:55:08 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' 
SIGINT SIGTERM EXIT 00:05:53.861 04:55:08 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60676' 00:05:53.861 04:55:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:53.861 04:55:08 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:53.861 04:55:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:53.861 04:55:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60676 /var/tmp/spdk-nbd.sock 00:05:53.861 04:55:08 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60676 ']' 00:05:53.861 04:55:08 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:53.861 04:55:08 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.861 04:55:08 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:53.861 04:55:08 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.861 04:55:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.861 [2024-07-24 04:55:08.406861] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:05:53.861 [2024-07-24 04:55:08.406974] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60676 ] 00:05:54.120 [2024-07-24 04:55:08.568922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.379 [2024-07-24 04:55:08.788031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.379 [2024-07-24 04:55:08.788062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.638 [2024-07-24 04:55:09.025765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:54.897 04:55:09 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.897 04:55:09 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:54.897 04:55:09 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.156 Malloc0 00:05:55.156 04:55:09 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.415 Malloc1 00:05:55.415 04:55:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.415 04:55:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.415 04:55:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.415 04:55:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.415 04:55:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.415 04:55:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.415 04:55:09 event.app_repeat -- bdev/nbd_common.sh@94 --
# nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.415 04:55:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.415 04:55:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.415 04:55:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.415 04:55:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.415 04:55:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:55.415 04:55:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:55.415 04:55:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.415 04:55:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.415 04:55:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.675 /dev/nbd0 00:05:55.675 04:55:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.675 04:55:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.675 04:55:10 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:55.675 04:55:10 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:55.675 04:55:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:55.675 04:55:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:55.675 04:55:10 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:55.675 04:55:10 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:55.675 04:55:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:55.675 04:55:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:55.675 04:55:10 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.675 1+0 records in 00:05:55.675 1+0 records out 00:05:55.675 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000166833 s, 24.6 MB/s 00:05:55.675 04:55:10 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.675 04:55:10 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:55.675 04:55:10 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.675 04:55:10 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:55.675 04:55:10 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:55.675 04:55:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.675 04:55:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.675 04:55:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:55.934 /dev/nbd1 00:05:55.934 04:55:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:55.934 04:55:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:55.934 04:55:10 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:55.934 04:55:10 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:55.934 04:55:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:55.934 04:55:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:55.934 04:55:10 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:55.934 04:55:10 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:55.934 04:55:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:55.934 04:55:10 event.app_repeat -- common/autotest_common.sh@882 -- # 
(( i <= 20 )) 00:05:55.934 04:55:10 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.934 1+0 records in 00:05:55.934 1+0 records out 00:05:55.934 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315217 s, 13.0 MB/s 00:05:55.934 04:55:10 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.934 04:55:10 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:55.934 04:55:10 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.934 04:55:10 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:55.934 04:55:10 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:55.934 04:55:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.934 04:55:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.934 04:55:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.934 04:55:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.934 04:55:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:56.193 { 00:05:56.193 "nbd_device": "/dev/nbd0", 00:05:56.193 "bdev_name": "Malloc0" 00:05:56.193 }, 00:05:56.193 { 00:05:56.193 "nbd_device": "/dev/nbd1", 00:05:56.193 "bdev_name": "Malloc1" 00:05:56.193 } 00:05:56.193 ]' 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:56.193 { 00:05:56.193 "nbd_device": "/dev/nbd0", 00:05:56.193 "bdev_name": "Malloc0" 00:05:56.193 }, 00:05:56.193 { 00:05:56.193 "nbd_device": "/dev/nbd1", 00:05:56.193 "bdev_name": "Malloc1" 00:05:56.193 } 
00:05:56.193 ]' 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:56.193 /dev/nbd1' 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:56.193 /dev/nbd1' 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:56.193 256+0 records in 00:05:56.193 256+0 records out 00:05:56.193 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00616859 s, 170 MB/s 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:56.193 256+0 records in 00:05:56.193 256+0 records out 
00:05:56.193 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278651 s, 37.6 MB/s 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:56.193 256+0 records in 00:05:56.193 256+0 records out 00:05:56.193 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0323075 s, 32.5 MB/s 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:56.193 04:55:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # 
local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.194 04:55:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.194 04:55:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:56.194 04:55:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:56.194 04:55:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.194 04:55:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.453 04:55:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.453 04:55:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.453 04:55:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.453 04:55:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.453 04:55:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.453 04:55:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.453 04:55:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.453 04:55:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.453 04:55:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.453 04:55:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:56.712 04:55:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:56.712 04:55:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:56.712 04:55:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:56.712 04:55:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.712 04:55:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.712 04:55:11 event.app_repeat -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:56.712 04:55:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.712 04:55:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.712 04:55:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.712 04:55:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.712 04:55:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.971 04:55:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:56.971 04:55:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:56.971 04:55:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.971 04:55:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:56.971 04:55:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:56.971 04:55:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.971 04:55:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:56.971 04:55:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:56.971 04:55:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:56.971 04:55:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:56.971 04:55:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:56.971 04:55:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:56.971 04:55:11 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:57.539 04:55:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:58.917 [2024-07-24 04:55:13.232198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:58.917 [2024-07-24 04:55:13.441134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started 
on core 1 00:05:58.917 [2024-07-24 04:55:13.441138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.175 [2024-07-24 04:55:13.667236] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.175 [2024-07-24 04:55:13.667354] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:59.175 [2024-07-24 04:55:13.667375] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:00.574 spdk_app_start Round 1 00:06:00.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:00.574 04:55:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:00.574 04:55:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:00.574 04:55:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60676 /var/tmp/spdk-nbd.sock 00:06:00.574 04:55:14 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60676 ']' 00:06:00.574 04:55:14 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.574 04:55:14 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.574 04:55:14 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:00.574 04:55:14 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.574 04:55:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.574 04:55:15 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.574 04:55:15 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:00.574 04:55:15 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.833 Malloc0 00:06:00.833 04:55:15 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.091 Malloc1 00:06:01.091 04:55:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.091 04:55:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.091 04:55:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.091 04:55:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:01.091 04:55:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.091 04:55:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:01.091 04:55:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.091 04:55:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.091 04:55:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.091 04:55:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:01.091 04:55:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.091 04:55:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:01.091 04:55:15 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:01.091 04:55:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:01.091 04:55:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.091 04:55:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:01.351 /dev/nbd0 00:06:01.351 04:55:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.351 04:55:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:01.351 04:55:15 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:01.351 04:55:15 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:01.351 04:55:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:01.351 04:55:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:01.351 04:55:15 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:01.351 04:55:15 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:01.351 04:55:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:01.351 04:55:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:01.351 04:55:15 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.351 1+0 records in 00:06:01.351 1+0 records out 00:06:01.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278407 s, 14.7 MB/s 00:06:01.351 04:55:15 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.351 04:55:15 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:01.351 04:55:15 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.351 
04:55:15 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:01.351 04:55:15 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:01.351 04:55:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.351 04:55:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.351 04:55:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:01.610 /dev/nbd1 00:06:01.610 04:55:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.610 04:55:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.610 04:55:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:01.610 04:55:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:01.610 04:55:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:01.610 04:55:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:01.610 04:55:16 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:01.610 04:55:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:01.610 04:55:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:01.610 04:55:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:01.610 04:55:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.610 1+0 records in 00:06:01.610 1+0 records out 00:06:01.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324977 s, 12.6 MB/s 00:06:01.610 04:55:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.610 04:55:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:01.610 04:55:16 event.app_repeat 
-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.610 04:55:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:01.610 04:55:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:01.610 04:55:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.610 04:55:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.610 04:55:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.610 04:55:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.610 04:55:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.870 04:55:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:01.870 { 00:06:01.870 "nbd_device": "/dev/nbd0", 00:06:01.870 "bdev_name": "Malloc0" 00:06:01.870 }, 00:06:01.870 { 00:06:01.870 "nbd_device": "/dev/nbd1", 00:06:01.870 "bdev_name": "Malloc1" 00:06:01.870 } 00:06:01.870 ]' 00:06:01.870 04:55:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.870 { 00:06:01.870 "nbd_device": "/dev/nbd0", 00:06:01.870 "bdev_name": "Malloc0" 00:06:01.870 }, 00:06:01.870 { 00:06:01.870 "nbd_device": "/dev/nbd1", 00:06:01.870 "bdev_name": "Malloc1" 00:06:01.870 } 00:06:01.870 ]' 00:06:01.870 04:55:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.870 04:55:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.870 /dev/nbd1' 00:06:01.870 04:55:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.870 04:55:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.870 /dev/nbd1' 00:06:01.870 04:55:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.870 04:55:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.870 
04:55:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.870 04:55:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.870 04:55:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.870 04:55:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.870 04:55:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.870 04:55:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.870 04:55:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:01.870 04:55:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.870 04:55:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:01.870 256+0 records in 00:06:01.870 256+0 records out 00:06:01.870 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00836794 s, 125 MB/s 00:06:01.870 04:55:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.870 04:55:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:01.870 256+0 records in 00:06:01.870 256+0 records out 00:06:01.870 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276399 s, 37.9 MB/s 00:06:01.870 04:55:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.870 04:55:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:02.129 256+0 records in 00:06:02.129 256+0 records out 00:06:02.129 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256721 s, 40.8 MB/s 00:06:02.129 04:55:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:02.129 04:55:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.129 04:55:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.129 04:55:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:02.129 04:55:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:02.129 04:55:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:02.129 04:55:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:02.129 04:55:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.129 04:55:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:02.129 04:55:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.129 04:55:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:02.129 04:55:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:02.129 04:55:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:02.129 04:55:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.129 04:55:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.129 04:55:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:02.129 04:55:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:02.129 04:55:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.129 04:55:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:02.389 04:55:16 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:02.389 04:55:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:02.389 04:55:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:02.389 04:55:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.389 04:55:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.389 04:55:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:02.389 04:55:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.389 04:55:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.389 04:55:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.389 04:55:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:02.648 04:55:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.648 04:55:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.648 04:55:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.648 04:55:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.648 04:55:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.648 04:55:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.648 04:55:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.648 04:55:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.648 04:55:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.648 04:55:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.648 04:55:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.907 04:55:17 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:02.907 04:55:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.907 04:55:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:02.907 04:55:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:02.907 04:55:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:02.907 04:55:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.907 04:55:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:02.907 04:55:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:02.907 04:55:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:02.907 04:55:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:02.907 04:55:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:02.907 04:55:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:02.907 04:55:17 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:03.166 04:55:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:04.544 [2024-07-24 04:55:19.134952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.804 [2024-07-24 04:55:19.354289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.804 [2024-07-24 04:55:19.354291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.063 [2024-07-24 04:55:19.578110] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:05.063 [2024-07-24 04:55:19.578221] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:05.063 [2024-07-24 04:55:19.578237] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:06:06.439 spdk_app_start Round 2 00:06:06.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:06.439 04:55:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:06.439 04:55:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:06.439 04:55:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60676 /var/tmp/spdk-nbd.sock 00:06:06.439 04:55:20 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60676 ']' 00:06:06.439 04:55:20 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.439 04:55:20 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.439 04:55:20 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:06.439 04:55:20 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.439 04:55:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.439 04:55:21 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.439 04:55:21 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:06.439 04:55:21 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.698 Malloc0 00:06:06.698 04:55:21 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.957 Malloc1 00:06:07.216 04:55:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.216 04:55:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.216 04:55:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.216 
04:55:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:07.216 04:55:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.216 04:55:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:07.216 04:55:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.216 04:55:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.216 04:55:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.216 04:55:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:07.216 04:55:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.216 04:55:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:07.216 04:55:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:07.216 04:55:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:07.216 04:55:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.216 04:55:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:07.216 /dev/nbd0 00:06:07.216 04:55:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:07.216 04:55:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:07.216 04:55:21 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:07.216 04:55:21 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:07.216 04:55:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:07.216 04:55:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:07.216 04:55:21 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:07.216 04:55:21 
event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:07.216 04:55:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:07.216 04:55:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:07.216 04:55:21 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.216 1+0 records in 00:06:07.216 1+0 records out 00:06:07.216 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312054 s, 13.1 MB/s 00:06:07.216 04:55:21 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.216 04:55:21 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:07.216 04:55:21 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.216 04:55:21 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:07.216 04:55:21 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:07.216 04:55:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.216 04:55:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.216 04:55:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:07.476 /dev/nbd1 00:06:07.476 04:55:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:07.476 04:55:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:07.476 04:55:22 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:07.476 04:55:22 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:07.476 04:55:22 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:07.476 04:55:22 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:07.476 04:55:22 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:07.476 04:55:22 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:07.476 04:55:22 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:07.476 04:55:22 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:07.476 04:55:22 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.476 1+0 records in 00:06:07.476 1+0 records out 00:06:07.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045683 s, 9.0 MB/s 00:06:07.476 04:55:22 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.476 04:55:22 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:07.476 04:55:22 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.476 04:55:22 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:07.476 04:55:22 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:07.476 04:55:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.476 04:55:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.476 04:55:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.476 04:55:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.476 04:55:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.735 04:55:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:07.735 { 00:06:07.735 "nbd_device": "/dev/nbd0", 00:06:07.735 "bdev_name": "Malloc0" 00:06:07.735 }, 00:06:07.735 { 00:06:07.735 "nbd_device": "/dev/nbd1", 00:06:07.735 "bdev_name": 
"Malloc1" 00:06:07.735 } 00:06:07.735 ]' 00:06:07.735 04:55:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:07.735 { 00:06:07.735 "nbd_device": "/dev/nbd0", 00:06:07.735 "bdev_name": "Malloc0" 00:06:07.735 }, 00:06:07.735 { 00:06:07.735 "nbd_device": "/dev/nbd1", 00:06:07.735 "bdev_name": "Malloc1" 00:06:07.735 } 00:06:07.735 ]' 00:06:07.735 04:55:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:07.994 /dev/nbd1' 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:07.994 /dev/nbd1' 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:07.994 256+0 records in 00:06:07.994 256+0 records out 00:06:07.994 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0053419 s, 196 MB/s 
00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:07.994 256+0 records in 00:06:07.994 256+0 records out 00:06:07.994 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234231 s, 44.8 MB/s 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:07.994 256+0 records in 00:06:07.994 256+0 records out 00:06:07.994 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0375605 s, 27.9 MB/s 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.994 04:55:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:07.995 04:55:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.995 04:55:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.995 04:55:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:07.995 04:55:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:07.995 04:55:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.995 04:55:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:08.254 04:55:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:08.254 04:55:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:08.254 04:55:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:08.254 04:55:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.254 04:55:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.254 04:55:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:08.254 04:55:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.254 04:55:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.254 04:55:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.254 04:55:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:08.513 04:55:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:08.513 04:55:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:06:08.513 04:55:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:08.513 04:55:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.513 04:55:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.513 04:55:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:08.513 04:55:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.513 04:55:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.513 04:55:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.513 04:55:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.513 04:55:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.772 04:55:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:08.772 04:55:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:08.772 04:55:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.772 04:55:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:08.772 04:55:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.772 04:55:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:08.772 04:55:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:08.772 04:55:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:08.772 04:55:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:08.772 04:55:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:08.772 04:55:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:08.772 04:55:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:08.772 04:55:23 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:09.031 04:55:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:10.407 [2024-07-24 04:55:24.922956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.666 [2024-07-24 04:55:25.131716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.666 [2024-07-24 04:55:25.131720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.925 [2024-07-24 04:55:25.362148] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:10.925 [2024-07-24 04:55:25.362250] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:10.925 [2024-07-24 04:55:25.362269] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:12.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:12.302 04:55:26 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60676 /var/tmp/spdk-nbd.sock 00:06:12.302 04:55:26 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60676 ']' 00:06:12.302 04:55:26 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:12.302 04:55:26 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.302 04:55:26 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:12.302 04:55:26 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.302 04:55:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.302 04:55:26 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.302 04:55:26 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:12.302 04:55:26 event.app_repeat -- event/event.sh@39 -- # killprocess 60676 00:06:12.302 04:55:26 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 60676 ']' 00:06:12.302 04:55:26 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 60676 00:06:12.302 04:55:26 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:12.302 04:55:26 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.302 04:55:26 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60676 00:06:12.302 killing process with pid 60676 00:06:12.302 04:55:26 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.302 04:55:26 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.302 04:55:26 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60676' 00:06:12.302 04:55:26 event.app_repeat -- common/autotest_common.sh@967 -- # kill 60676 00:06:12.302 04:55:26 event.app_repeat -- common/autotest_common.sh@972 -- # wait 60676 00:06:13.679 spdk_app_start is called in Round 0. 00:06:13.679 Shutdown signal received, stop current app iteration 00:06:13.679 Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 reinitialization... 00:06:13.679 spdk_app_start is called in Round 1. 00:06:13.679 Shutdown signal received, stop current app iteration 00:06:13.679 Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 reinitialization... 00:06:13.679 spdk_app_start is called in Round 2. 
00:06:13.679 Shutdown signal received, stop current app iteration 00:06:13.679 Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 reinitialization... 00:06:13.679 spdk_app_start is called in Round 3. 00:06:13.679 Shutdown signal received, stop current app iteration 00:06:13.679 04:55:28 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:13.679 04:55:28 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:13.679 ************************************ 00:06:13.679 END TEST app_repeat 00:06:13.679 ************************************ 00:06:13.679 00:06:13.679 real 0m19.735s 00:06:13.679 user 0m41.008s 00:06:13.679 sys 0m2.997s 00:06:13.679 04:55:28 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.679 04:55:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:13.679 04:55:28 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:13.679 04:55:28 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:13.679 04:55:28 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.679 04:55:28 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.679 04:55:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:13.679 ************************************ 00:06:13.679 START TEST cpu_locks 00:06:13.679 ************************************ 00:06:13.679 04:55:28 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:13.679 * Looking for test storage... 
00:06:13.679 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:13.679 04:55:28 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:13.679 04:55:28 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:13.680 04:55:28 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:13.680 04:55:28 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:13.680 04:55:28 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.680 04:55:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.680 04:55:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.680 ************************************ 00:06:13.680 START TEST default_locks 00:06:13.680 ************************************ 00:06:13.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.680 04:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:13.680 04:55:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=61115 00:06:13.680 04:55:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.680 04:55:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 61115 00:06:13.680 04:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 61115 ']' 00:06:13.680 04:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.680 04:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.680 04:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:13.680 04:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.680 04:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.939 [2024-07-24 04:55:28.357041] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:06:13.939 [2024-07-24 04:55:28.357187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61115 ] 00:06:13.939 [2024-07-24 04:55:28.520017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.198 [2024-07-24 04:55:28.731915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.456 [2024-07-24 04:55:28.968073] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:15.065 04:55:29 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.065 04:55:29 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:15.065 04:55:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 61115 00:06:15.066 04:55:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 61115 00:06:15.066 04:55:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.633 04:55:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 61115 00:06:15.633 04:55:30 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 61115 ']' 00:06:15.633 04:55:30 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 61115 00:06:15.633 04:55:30 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:15.633 04:55:30 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:06:15.633 04:55:30 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61115 00:06:15.633 killing process with pid 61115 00:06:15.633 04:55:30 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:15.633 04:55:30 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:15.633 04:55:30 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61115' 00:06:15.633 04:55:30 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 61115 00:06:15.633 04:55:30 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 61115 00:06:18.184 04:55:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 61115 00:06:18.184 04:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:18.184 04:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 61115 00:06:18.184 04:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:18.184 04:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.184 04:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:18.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:18.184 ERROR: process (pid: 61115) is no longer running 00:06:18.184 ************************************ 00:06:18.184 END TEST default_locks 00:06:18.184 ************************************ 00:06:18.184 04:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.184 04:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 61115 00:06:18.184 04:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 61115 ']' 00:06:18.184 04:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.184 04:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.184 04:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.184 04:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.184 04:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.184 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (61115) - No such process 00:06:18.184 04:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.184 04:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:18.184 04:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:18.184 04:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:18.184 04:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:18.184 04:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:18.184 04:55:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:18.184 04:55:32 
event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:18.184 04:55:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:18.184 04:55:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:18.184 00:06:18.184 real 0m4.346s 00:06:18.184 user 0m4.412s 00:06:18.184 sys 0m0.684s 00:06:18.184 04:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.184 04:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.184 04:55:32 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:18.184 04:55:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.184 04:55:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.184 04:55:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.184 ************************************ 00:06:18.184 START TEST default_locks_via_rpc 00:06:18.184 ************************************ 00:06:18.184 04:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:18.184 04:55:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=61196 00:06:18.184 04:55:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 61196 00:06:18.185 04:55:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.185 04:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61196 ']' 00:06:18.185 04:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.185 04:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.185 04:55:32 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.185 04:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.185 04:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.185 [2024-07-24 04:55:32.761323] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:06:18.185 [2024-07-24 04:55:32.761631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61196 ] 00:06:18.444 [2024-07-24 04:55:32.922242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.703 [2024-07-24 04:55:33.139692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.961 [2024-07-24 04:55:33.358753] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:19.529 04:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.529 04:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:19.529 04:55:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:19.529 04:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.529 04:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.529 04:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.529 04:55:34 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@67 -- # no_locks 00:06:19.529 04:55:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:19.529 04:55:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:19.529 04:55:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:19.529 04:55:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:19.529 04:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.529 04:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.529 04:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.529 04:55:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 61196 00:06:19.529 04:55:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 61196 00:06:19.529 04:55:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.097 04:55:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 61196 00:06:20.097 04:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 61196 ']' 00:06:20.097 04:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 61196 00:06:20.097 04:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:20.097 04:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.097 04:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61196 00:06:20.097 killing process with pid 61196 00:06:20.097 04:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:20.097 
04:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:20.097 04:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61196' 00:06:20.097 04:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 61196 00:06:20.097 04:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 61196 00:06:22.634 00:06:22.634 real 0m4.344s 00:06:22.634 user 0m4.355s 00:06:22.634 sys 0m0.734s 00:06:22.634 04:55:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.634 ************************************ 00:06:22.634 04:55:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.634 END TEST default_locks_via_rpc 00:06:22.634 ************************************ 00:06:22.634 04:55:37 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:22.634 04:55:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.634 04:55:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.634 04:55:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.634 ************************************ 00:06:22.634 START TEST non_locking_app_on_locked_coremask 00:06:22.634 ************************************ 00:06:22.634 04:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:22.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:22.635 04:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=61272 00:06:22.635 04:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 61272 /var/tmp/spdk.sock 00:06:22.635 04:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61272 ']' 00:06:22.635 04:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.635 04:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.635 04:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.635 04:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.635 04:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.635 04:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.635 [2024-07-24 04:55:37.211530] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:06:22.635 [2024-07-24 04:55:37.211712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61272 ] 00:06:22.894 [2024-07-24 04:55:37.395647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.153 [2024-07-24 04:55:37.609017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.412 [2024-07-24 04:55:37.840978] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:23.981 04:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:23.981 04:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:23.981 04:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:23.981 04:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61293 00:06:23.981 04:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61293 /var/tmp/spdk2.sock 00:06:23.981 04:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61293 ']' 00:06:23.981 04:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.981 04:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.981 04:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.981 04:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.981 04:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.981 [2024-07-24 04:55:38.587655] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:06:23.981 [2024-07-24 04:55:38.588011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61293 ] 00:06:24.240 [2024-07-24 04:55:38.754516] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:24.240 [2024-07-24 04:55:38.754591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.807 [2024-07-24 04:55:39.186862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.066 [2024-07-24 04:55:39.650505] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:26.971 04:55:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.971 04:55:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:26.971 04:55:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 61272 00:06:26.971 04:55:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61272 00:06:26.971 04:55:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.906 04:55:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 61272 00:06:27.906 04:55:42 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61272 ']' 00:06:27.906 04:55:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 61272 00:06:27.906 04:55:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:27.906 04:55:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.906 04:55:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61272 00:06:27.906 killing process with pid 61272 00:06:27.906 04:55:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:27.906 04:55:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:27.906 04:55:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61272' 00:06:27.906 04:55:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 61272 00:06:27.906 04:55:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 61272 00:06:33.182 04:55:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61293 00:06:33.182 04:55:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61293 ']' 00:06:33.182 04:55:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 61293 00:06:33.182 04:55:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:33.182 04:55:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:33.182 04:55:47 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61293 00:06:33.182 killing process with pid 61293 00:06:33.182 04:55:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:33.182 04:55:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:33.182 04:55:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61293' 00:06:33.182 04:55:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 61293 00:06:33.182 04:55:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 61293 00:06:35.085 00:06:35.085 real 0m12.497s 00:06:35.085 user 0m12.852s 00:06:35.085 sys 0m1.519s 00:06:35.085 ************************************ 00:06:35.085 END TEST non_locking_app_on_locked_coremask 00:06:35.085 ************************************ 00:06:35.085 04:55:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.085 04:55:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.085 04:55:49 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:35.085 04:55:49 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:35.085 04:55:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.085 04:55:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.085 ************************************ 00:06:35.085 START TEST locking_app_on_unlocked_coremask 00:06:35.085 ************************************ 00:06:35.085 04:55:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- 
# locking_app_on_unlocked_coremask 00:06:35.085 04:55:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61451 00:06:35.085 04:55:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61451 /var/tmp/spdk.sock 00:06:35.085 04:55:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61451 ']' 00:06:35.085 04:55:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.085 04:55:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.085 04:55:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:35.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.085 04:55:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.085 04:55:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.085 04:55:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.344 [2024-07-24 04:55:49.775403] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:06:35.344 [2024-07-24 04:55:49.775597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61451 ] 00:06:35.344 [2024-07-24 04:55:49.953775] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:35.344 [2024-07-24 04:55:49.953948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.603 [2024-07-24 04:55:50.178434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.884 [2024-07-24 04:55:50.409691] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:36.462 04:55:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.462 04:55:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:36.462 04:55:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61478 00:06:36.462 04:55:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:36.462 04:55:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61478 /var/tmp/spdk2.sock 00:06:36.462 04:55:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61478 ']' 00:06:36.462 04:55:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.462 04:55:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.462 04:55:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:36.462 04:55:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.462 04:55:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.722 [2024-07-24 04:55:51.207597] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:06:36.722 [2024-07-24 04:55:51.208074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61478 ] 00:06:36.981 [2024-07-24 04:55:51.393548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.240 [2024-07-24 04:55:51.830429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.809 [2024-07-24 04:55:52.289412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:39.189 04:55:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.189 04:55:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:39.189 04:55:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61478 00:06:39.189 04:55:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61478 00:06:39.189 04:55:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:40.568 04:55:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61451 00:06:40.568 04:55:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61451 ']' 00:06:40.568 04:55:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 61451 00:06:40.568 
04:55:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:40.568 04:55:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:40.568 04:55:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61451 00:06:40.568 04:55:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:40.568 04:55:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:40.568 killing process with pid 61451 00:06:40.568 04:55:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61451' 00:06:40.568 04:55:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 61451 00:06:40.568 04:55:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 61451 00:06:45.845 04:55:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61478 00:06:45.845 04:55:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61478 ']' 00:06:45.845 04:55:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 61478 00:06:45.845 04:55:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:45.845 04:55:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:45.845 04:55:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61478 00:06:45.845 killing process with pid 61478 00:06:45.845 04:55:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:45.845 04:55:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:45.845 04:55:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61478' 00:06:45.845 04:55:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 61478 00:06:45.845 04:55:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 61478 00:06:47.752 00:06:47.752 real 0m12.551s 00:06:47.752 user 0m12.978s 00:06:47.752 sys 0m1.575s 00:06:47.752 04:56:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.752 ************************************ 00:06:47.752 END TEST locking_app_on_unlocked_coremask 00:06:47.752 ************************************ 00:06:47.752 04:56:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.752 04:56:02 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:47.752 04:56:02 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.752 04:56:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.752 04:56:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.752 ************************************ 00:06:47.752 START TEST locking_app_on_locked_coremask 00:06:47.752 ************************************ 00:06:47.752 04:56:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:47.752 04:56:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61631 00:06:47.752 04:56:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61631 /var/tmp/spdk.sock 
00:06:47.752 04:56:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61631 ']' 00:06:47.752 04:56:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:47.752 04:56:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.752 04:56:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.752 04:56:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.752 04:56:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.752 04:56:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.012 [2024-07-24 04:56:02.389785] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:06:48.012 [2024-07-24 04:56:02.389975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61631 ] 00:06:48.012 [2024-07-24 04:56:02.572218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.271 [2024-07-24 04:56:02.784302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.530 [2024-07-24 04:56:03.010077] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:49.099 04:56:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.099 04:56:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:49.099 04:56:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:49.099 04:56:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61653 00:06:49.099 04:56:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61653 /var/tmp/spdk2.sock 00:06:49.099 04:56:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:49.099 04:56:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 61653 /var/tmp/spdk2.sock 00:06:49.099 04:56:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:49.099 04:56:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:49.099 04:56:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t 
waitforlisten 00:06:49.099 04:56:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:49.099 04:56:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 61653 /var/tmp/spdk2.sock 00:06:49.099 04:56:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61653 ']' 00:06:49.099 04:56:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.099 04:56:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.099 04:56:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.099 04:56:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.099 04:56:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.358 [2024-07-24 04:56:03.755571] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:06:49.358 [2024-07-24 04:56:03.755911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61653 ] 00:06:49.358 [2024-07-24 04:56:03.921440] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61631 has claimed it. 00:06:49.358 [2024-07-24 04:56:03.921520] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:49.927 ERROR: process (pid: 61653) is no longer running 00:06:49.927 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (61653) - No such process 00:06:49.927 04:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.927 04:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:49.927 04:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:49.927 04:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:49.927 04:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:49.927 04:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:49.927 04:56:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61631 00:06:49.927 04:56:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61631 00:06:49.927 04:56:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:50.186 04:56:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61631 00:06:50.186 04:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61631 ']' 00:06:50.186 04:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 61631 00:06:50.186 04:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:50.186 04:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:50.186 04:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61631 00:06:50.186 
killing process with pid 61631 00:06:50.186 04:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:50.186 04:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:50.186 04:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61631' 00:06:50.186 04:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 61631 00:06:50.186 04:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 61631 00:06:52.812 ************************************ 00:06:52.812 END TEST locking_app_on_locked_coremask 00:06:52.812 ************************************ 00:06:52.812 00:06:52.812 real 0m4.975s 00:06:52.812 user 0m5.144s 00:06:52.812 sys 0m0.826s 00:06:52.812 04:56:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.812 04:56:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.812 04:56:07 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:52.812 04:56:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.812 04:56:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.812 04:56:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.812 ************************************ 00:06:52.812 START TEST locking_overlapped_coremask 00:06:52.812 ************************************ 00:06:52.812 04:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:52.812 04:56:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61717 00:06:52.812 04:56:07 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61717 /var/tmp/spdk.sock 00:06:52.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.812 04:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 61717 ']' 00:06:52.812 04:56:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:52.812 04:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.812 04:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.812 04:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.813 04:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.813 04:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.813 [2024-07-24 04:56:07.430803] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:06:52.813 [2024-07-24 04:56:07.431016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61717 ] 00:06:53.075 [2024-07-24 04:56:07.609583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.335 [2024-07-24 04:56:07.831137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.335 [2024-07-24 04:56:07.831287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.335 [2024-07-24 04:56:07.831316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.593 [2024-07-24 04:56:08.074048] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.159 04:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.159 04:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:54.159 04:56:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61741 00:06:54.159 04:56:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:54.159 04:56:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61741 /var/tmp/spdk2.sock 00:06:54.159 04:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:54.159 04:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 61741 /var/tmp/spdk2.sock 00:06:54.159 04:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:54.159 04:56:08 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.159 04:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:54.160 04:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.160 04:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 61741 /var/tmp/spdk2.sock 00:06:54.160 04:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 61741 ']' 00:06:54.160 04:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.160 04:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.160 04:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:54.160 04:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.160 04:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.418 [2024-07-24 04:56:08.886722] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:06:54.418 [2024-07-24 04:56:08.887110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61741 ] 00:06:54.677 [2024-07-24 04:56:09.066940] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61717 has claimed it. 
00:06:54.677 [2024-07-24 04:56:09.067004] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:54.937 ERROR: process (pid: 61741) is no longer running 00:06:54.937 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (61741) - No such process 00:06:54.937 04:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.937 04:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:54.937 04:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:54.937 04:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:54.937 04:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:54.937 04:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:54.937 04:56:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:54.937 04:56:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:54.937 04:56:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:54.937 04:56:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:54.937 04:56:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61717 00:06:54.937 04:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 61717 ']' 00:06:54.937 
04:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 61717 00:06:54.937 04:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:54.937 04:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:54.937 04:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61717 00:06:54.937 killing process with pid 61717 00:06:54.937 04:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:54.937 04:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:54.937 04:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61717' 00:06:54.937 04:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 61717 00:06:54.937 04:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 61717 00:06:57.475 00:06:57.475 real 0m4.704s 00:06:57.475 user 0m12.180s 00:06:57.475 sys 0m0.677s 00:06:57.475 04:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.475 ************************************ 00:06:57.475 END TEST locking_overlapped_coremask 00:06:57.475 ************************************ 00:06:57.475 04:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.475 04:56:12 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:57.475 04:56:12 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:57.475 04:56:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.475 04:56:12 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:06:57.475 ************************************ 00:06:57.475 START TEST locking_overlapped_coremask_via_rpc 00:06:57.475 ************************************ 00:06:57.475 04:56:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:57.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.475 04:56:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61810 00:06:57.475 04:56:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61810 /var/tmp/spdk.sock 00:06:57.475 04:56:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61810 ']' 00:06:57.475 04:56:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.475 04:56:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.475 04:56:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.475 04:56:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.475 04:56:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:57.475 04:56:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.735 [2024-07-24 04:56:12.149299] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:06:57.735 [2024-07-24 04:56:12.149437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61810 ] 00:06:57.735 [2024-07-24 04:56:12.306278] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:57.735 [2024-07-24 04:56:12.306323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.994 [2024-07-24 04:56:12.528197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.994 [2024-07-24 04:56:12.528314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.994 [2024-07-24 04:56:12.528336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.253 [2024-07-24 04:56:12.768316] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.827 04:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.827 04:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:58.827 04:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61828 00:06:58.827 04:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:58.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:58.827 04:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61828 /var/tmp/spdk2.sock 00:06:58.827 04:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61828 ']' 00:06:58.827 04:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.827 04:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.827 04:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.827 04:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.827 04:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.091 [2024-07-24 04:56:13.585769] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:06:59.091 [2024-07-24 04:56:13.585965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61828 ] 00:06:59.350 [2024-07-24 04:56:13.766109] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:59.350 [2024-07-24 04:56:13.766180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.610 [2024-07-24 04:56:14.232029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:59.610 [2024-07-24 04:56:14.235709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.610 [2024-07-24 04:56:14.235745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:00.179 [2024-07-24 04:56:14.732287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:01.560 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.560 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:01.560 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:01.560 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.560 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc 
-- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.820 [2024-07-24 04:56:16.206744] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61810 has claimed it. 00:07:01.820 request: 00:07:01.820 { 00:07:01.820 "method": "framework_enable_cpumask_locks", 00:07:01.820 "req_id": 1 00:07:01.820 } 00:07:01.820 Got JSON-RPC error response 00:07:01.820 response: 00:07:01.820 { 00:07:01.820 "code": -32603, 00:07:01.820 "message": "Failed to claim CPU core: 2" 00:07:01.820 } 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61810 /var/tmp/spdk.sock 00:07:01.820 04:56:16 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61810 ']' 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61828 /var/tmp/spdk2.sock 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61828 ']' 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.820 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.080 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.080 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:02.080 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:02.080 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:02.080 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:02.080 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:02.080 00:07:02.080 real 0m4.533s 00:07:02.080 user 0m1.247s 00:07:02.080 sys 0m0.265s 00:07:02.080 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.080 04:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.080 ************************************ 00:07:02.080 END TEST locking_overlapped_coremask_via_rpc 00:07:02.080 ************************************ 00:07:02.080 04:56:16 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:02.080 04:56:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61810 ]] 00:07:02.080 04:56:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 61810 00:07:02.080 04:56:16 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61810 ']' 00:07:02.080 04:56:16 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61810 00:07:02.080 04:56:16 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:02.080 04:56:16 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:02.080 04:56:16 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61810 00:07:02.080 killing process with pid 61810 00:07:02.080 04:56:16 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:02.080 04:56:16 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:02.080 04:56:16 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61810' 00:07:02.080 04:56:16 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 61810 00:07:02.080 04:56:16 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 61810 00:07:05.373 04:56:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61828 ]] 00:07:05.373 04:56:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61828 00:07:05.373 04:56:19 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61828 ']' 00:07:05.373 04:56:19 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61828 00:07:05.373 04:56:19 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:05.373 04:56:19 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:05.373 04:56:19 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61828 00:07:05.373 killing process with pid 61828 00:07:05.373 04:56:19 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:05.373 04:56:19 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:05.373 04:56:19 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing 
process with pid 61828' 00:07:05.373 04:56:19 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 61828 00:07:05.373 04:56:19 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 61828 00:07:07.277 04:56:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:07.278 Process with pid 61810 is not found 00:07:07.278 04:56:21 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:07.278 04:56:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61810 ]] 00:07:07.278 04:56:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61810 00:07:07.278 04:56:21 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61810 ']' 00:07:07.278 04:56:21 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61810 00:07:07.278 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (61810) - No such process 00:07:07.278 04:56:21 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 61810 is not found' 00:07:07.278 04:56:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61828 ]] 00:07:07.278 04:56:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61828 00:07:07.278 04:56:21 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61828 ']' 00:07:07.278 04:56:21 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61828 00:07:07.278 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (61828) - No such process 00:07:07.278 Process with pid 61828 is not found 00:07:07.278 04:56:21 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 61828 is not found' 00:07:07.278 04:56:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:07.278 00:07:07.278 real 0m53.693s 00:07:07.278 user 1m29.585s 00:07:07.278 sys 0m7.389s 00:07:07.278 04:56:21 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.278 ************************************ 00:07:07.278 END TEST cpu_locks 00:07:07.278 
************************************ 00:07:07.278 04:56:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.278 ************************************ 00:07:07.278 END TEST event 00:07:07.278 ************************************ 00:07:07.278 00:07:07.278 real 1m24.941s 00:07:07.278 user 2m27.444s 00:07:07.278 sys 0m11.584s 00:07:07.278 04:56:21 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.278 04:56:21 event -- common/autotest_common.sh@10 -- # set +x 00:07:07.537 04:56:21 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:07.537 04:56:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:07.537 04:56:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.537 04:56:21 -- common/autotest_common.sh@10 -- # set +x 00:07:07.537 ************************************ 00:07:07.537 START TEST thread 00:07:07.537 ************************************ 00:07:07.537 04:56:21 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:07.537 * Looking for test storage... 
00:07:07.537 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:07.537 04:56:22 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:07.537 04:56:22 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:07.537 04:56:22 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.537 04:56:22 thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.537 ************************************ 00:07:07.537 START TEST thread_poller_perf 00:07:07.537 ************************************ 00:07:07.537 04:56:22 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:07.537 [2024-07-24 04:56:22.119291] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:07:07.537 [2024-07-24 04:56:22.120188] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62021 ] 00:07:07.796 [2024-07-24 04:56:22.308076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.121 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:08.121 [2024-07-24 04:56:22.627405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.500 ====================================== 00:07:09.500 busy:2108906372 (cyc) 00:07:09.500 total_run_count: 408000 00:07:09.500 tsc_hz: 2100000000 (cyc) 00:07:09.500 ====================================== 00:07:09.500 poller_cost: 5168 (cyc), 2460 (nsec) 00:07:09.500 ************************************ 00:07:09.500 END TEST thread_poller_perf 00:07:09.500 ************************************ 00:07:09.500 00:07:09.500 real 0m1.986s 00:07:09.500 user 0m1.737s 00:07:09.500 sys 0m0.139s 00:07:09.500 04:56:24 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.500 04:56:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:09.500 04:56:24 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:09.500 04:56:24 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:09.500 04:56:24 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.500 04:56:24 thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.500 ************************************ 00:07:09.500 START TEST thread_poller_perf 00:07:09.500 ************************************ 00:07:09.500 04:56:24 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:09.759 [2024-07-24 04:56:24.165909] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:07:09.759 [2024-07-24 04:56:24.166054] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62063 ] 00:07:09.759 [2024-07-24 04:56:24.347806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.018 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:10.018 [2024-07-24 04:56:24.561688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.398 ====================================== 00:07:11.398 busy:2103302148 (cyc) 00:07:11.398 total_run_count: 5346000 00:07:11.398 tsc_hz: 2100000000 (cyc) 00:07:11.398 ====================================== 00:07:11.398 poller_cost: 393 (cyc), 187 (nsec) 00:07:11.398 00:07:11.398 real 0m1.870s 00:07:11.398 user 0m1.636s 00:07:11.398 sys 0m0.127s 00:07:11.398 ************************************ 00:07:11.398 END TEST thread_poller_perf 00:07:11.398 ************************************ 00:07:11.398 04:56:25 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.398 04:56:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:11.657 04:56:26 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:11.657 ************************************ 00:07:11.657 END TEST thread 00:07:11.657 ************************************ 00:07:11.657 00:07:11.657 real 0m4.085s 00:07:11.657 user 0m3.453s 00:07:11.657 sys 0m0.414s 00:07:11.657 04:56:26 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.657 04:56:26 thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.657 04:56:26 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:11.657 04:56:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:11.657 04:56:26 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:07:11.657 04:56:26 -- common/autotest_common.sh@10 -- # set +x 00:07:11.657 ************************************ 00:07:11.657 START TEST accel 00:07:11.657 ************************************ 00:07:11.657 04:56:26 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:11.657 * Looking for test storage... 00:07:11.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:11.657 04:56:26 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:11.657 04:56:26 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:11.657 04:56:26 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:11.657 04:56:26 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=62144 00:07:11.657 04:56:26 accel -- accel/accel.sh@63 -- # waitforlisten 62144 00:07:11.657 04:56:26 accel -- common/autotest_common.sh@829 -- # '[' -z 62144 ']' 00:07:11.657 04:56:26 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.657 04:56:26 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.657 04:56:26 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:11.657 04:56:26 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.657 04:56:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.657 04:56:26 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:11.657 04:56:26 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:11.657 04:56:26 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.657 04:56:26 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.657 04:56:26 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.657 04:56:26 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.657 04:56:26 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.657 04:56:26 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:11.657 04:56:26 accel -- accel/accel.sh@41 -- # jq -r . 00:07:11.916 [2024-07-24 04:56:26.299152] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:07:11.917 [2024-07-24 04:56:26.299274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62144 ] 00:07:11.917 [2024-07-24 04:56:26.460047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.176 [2024-07-24 04:56:26.674072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.435 [2024-07-24 04:56:26.911858] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:13.005 04:56:27 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:13.005 04:56:27 accel -- common/autotest_common.sh@862 -- # return 0 00:07:13.005 04:56:27 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:13.005 04:56:27 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:13.005 04:56:27 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:13.005 04:56:27 accel -- accel/accel.sh@68 -- # [[ -n 
'' ]] 00:07:13.005 04:56:27 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:13.005 04:56:27 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:13.005 04:56:27 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.005 04:56:27 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:07:13.005 04:56:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.005 04:56:27 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.005 04:56:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.005 04:56:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.005 04:56:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.005 04:56:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.005 04:56:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.005 04:56:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.005 04:56:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.005 04:56:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.005 04:56:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.005 04:56:27 accel -- 
accel/accel.sh@72 -- # read -r opc module 00:07:13.005 04:56:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.005 04:56:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.005 04:56:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.005 04:56:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.005 04:56:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.005 04:56:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.005 04:56:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.005 04:56:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.005 04:56:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.005 04:56:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.005 04:56:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.005 04:56:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.005 04:56:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.005 04:56:27 accel -- accel/accel.sh@71 
-- # for opc_opt in "${exp_opcs[@]}" 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.005 04:56:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.005 04:56:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.005 04:56:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.005 04:56:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.005 04:56:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.005 04:56:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.005 04:56:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.005 04:56:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.005 04:56:27 accel -- accel/accel.sh@75 -- # killprocess 62144 00:07:13.005 04:56:27 accel -- common/autotest_common.sh@948 -- # '[' -z 62144 ']' 00:07:13.005 04:56:27 accel -- common/autotest_common.sh@952 -- # kill -0 62144 00:07:13.005 04:56:27 accel -- common/autotest_common.sh@953 -- # uname 00:07:13.005 04:56:27 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:13.005 04:56:27 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62144 00:07:13.264 04:56:27 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:13.264 04:56:27 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:13.264 04:56:27 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62144' 00:07:13.264 killing process with pid 62144 00:07:13.264 
04:56:27 accel -- common/autotest_common.sh@967 -- # kill 62144 00:07:13.264 04:56:27 accel -- common/autotest_common.sh@972 -- # wait 62144 00:07:15.804 04:56:30 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:15.804 04:56:30 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:15.804 04:56:30 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:15.804 04:56:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.804 04:56:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.804 04:56:30 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:15.804 04:56:30 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:15.804 04:56:30 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:15.804 04:56:30 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.804 04:56:30 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.804 04:56:30 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.804 04:56:30 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.804 04:56:30 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.804 04:56:30 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:15.804 04:56:30 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:15.804 04:56:30 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.804 04:56:30 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:15.804 04:56:30 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:15.804 04:56:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:15.804 04:56:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.804 04:56:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.804 ************************************ 00:07:15.804 START TEST accel_missing_filename 00:07:15.804 ************************************ 00:07:15.804 04:56:30 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:15.804 04:56:30 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:15.804 04:56:30 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:15.804 04:56:30 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:15.804 04:56:30 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.804 04:56:30 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:15.804 04:56:30 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.804 04:56:30 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:15.804 04:56:30 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:15.804 04:56:30 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:15.804 04:56:30 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.804 04:56:30 
accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.804 04:56:30 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.804 04:56:30 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.804 04:56:30 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.804 04:56:30 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:15.804 04:56:30 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:15.804 [2024-07-24 04:56:30.286530] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:07:15.804 [2024-07-24 04:56:30.286650] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62225 ] 00:07:16.064 [2024-07-24 04:56:30.444990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.064 [2024-07-24 04:56:30.658097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.324 [2024-07-24 04:56:30.889931] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:16.892 [2024-07-24 04:56:31.440126] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:17.460 A filename is required. 
00:07:17.460 04:56:31 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:17.460 04:56:31 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:17.460 04:56:31 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:17.460 ************************************ 00:07:17.460 END TEST accel_missing_filename 00:07:17.460 ************************************ 00:07:17.460 04:56:31 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:17.460 04:56:31 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:17.460 04:56:31 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:17.460 00:07:17.460 real 0m1.622s 00:07:17.460 user 0m1.378s 00:07:17.460 sys 0m0.181s 00:07:17.460 04:56:31 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.460 04:56:31 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:17.460 04:56:31 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:17.460 04:56:31 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:17.460 04:56:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.460 04:56:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.460 ************************************ 00:07:17.460 START TEST accel_compress_verify 00:07:17.460 ************************************ 00:07:17.460 04:56:31 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:17.460 04:56:31 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:17.460 04:56:31 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l 
/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:17.460 04:56:31 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:17.460 04:56:31 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.460 04:56:31 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:17.460 04:56:31 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.460 04:56:31 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:17.460 04:56:31 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:17.460 04:56:31 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:17.460 04:56:31 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.460 04:56:31 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.460 04:56:31 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.460 04:56:31 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.460 04:56:31 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.460 04:56:31 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:17.460 04:56:31 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:17.460 [2024-07-24 04:56:31.990492] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:07:17.460 [2024-07-24 04:56:31.990673] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62256 ] 00:07:17.720 [2024-07-24 04:56:32.168907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.979 [2024-07-24 04:56:32.381429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.979 [2024-07-24 04:56:32.607754] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:18.548 [2024-07-24 04:56:33.152646] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:19.117 00:07:19.117 Compression does not support the verify option, aborting. 00:07:19.117 04:56:33 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:19.117 04:56:33 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.117 04:56:33 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:19.117 04:56:33 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:19.117 04:56:33 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:19.117 04:56:33 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.117 00:07:19.117 real 0m1.654s 00:07:19.117 user 0m1.401s 00:07:19.117 sys 0m0.187s 00:07:19.117 04:56:33 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.117 04:56:33 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:19.117 ************************************ 00:07:19.117 END TEST accel_compress_verify 00:07:19.117 ************************************ 00:07:19.117 04:56:33 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:19.117 04:56:33 accel -- 
common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:19.117 04:56:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.117 04:56:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.117 ************************************ 00:07:19.118 START TEST accel_wrong_workload 00:07:19.118 ************************************ 00:07:19.118 04:56:33 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:19.118 04:56:33 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:19.118 04:56:33 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:19.118 04:56:33 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:19.118 04:56:33 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.118 04:56:33 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:19.118 04:56:33 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.118 04:56:33 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:19.118 04:56:33 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:19.118 04:56:33 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:19.118 04:56:33 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.118 04:56:33 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.118 04:56:33 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.118 04:56:33 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.118 04:56:33 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.118 04:56:33 
accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:19.118 04:56:33 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:19.118 Unsupported workload type: foobar 00:07:19.118 [2024-07-24 04:56:33.695968] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:19.118 accel_perf options: 00:07:19.118 [-h help message] 00:07:19.118 [-q queue depth per core] 00:07:19.118 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:19.118 [-T number of threads per core 00:07:19.118 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:19.118 [-t time in seconds] 00:07:19.118 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:19.118 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:19.118 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:19.118 [-l for compress/decompress workloads, name of uncompressed input file 00:07:19.118 [-S for crc32c workload, use this seed value (default 0) 00:07:19.118 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:19.118 [-f for fill workload, use this BYTE value (default 255) 00:07:19.118 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:19.118 [-y verify result if this switch is on] 00:07:19.118 [-a tasks to allocate per core (default: same value as -q)] 00:07:19.118 Can be used to spread operations across a wider range of memory. 
00:07:19.118 04:56:33 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:19.118 04:56:33 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.118 ************************************ 00:07:19.118 END TEST accel_wrong_workload 00:07:19.118 ************************************ 00:07:19.118 04:56:33 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:19.118 04:56:33 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.118 00:07:19.118 real 0m0.089s 00:07:19.118 user 0m0.082s 00:07:19.118 sys 0m0.047s 00:07:19.118 04:56:33 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.118 04:56:33 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:19.379 04:56:33 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:19.379 04:56:33 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:19.379 04:56:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.379 04:56:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.379 ************************************ 00:07:19.379 START TEST accel_negative_buffers 00:07:19.379 ************************************ 00:07:19.379 04:56:33 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:19.379 04:56:33 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:19.379 04:56:33 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:19.379 04:56:33 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:19.379 04:56:33 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.379 04:56:33 accel.accel_negative_buffers -- 
common/autotest_common.sh@640 -- # type -t accel_perf 00:07:19.379 04:56:33 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.379 04:56:33 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:19.379 04:56:33 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:19.379 04:56:33 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:19.379 04:56:33 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.379 04:56:33 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.379 04:56:33 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.379 04:56:33 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.379 04:56:33 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.379 04:56:33 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:19.379 04:56:33 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:19.379 -x option must be non-negative. 00:07:19.379 [2024-07-24 04:56:33.847707] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:19.379 accel_perf options: 00:07:19.379 [-h help message] 00:07:19.379 [-q queue depth per core] 00:07:19.379 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:19.379 [-T number of threads per core 00:07:19.379 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:07:19.379 [-t time in seconds] 00:07:19.379 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:19.379 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:19.379 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:19.379 [-l for compress/decompress workloads, name of uncompressed input file 00:07:19.379 [-S for crc32c workload, use this seed value (default 0) 00:07:19.379 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:19.379 [-f for fill workload, use this BYTE value (default 255) 00:07:19.379 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:19.379 [-y verify result if this switch is on] 00:07:19.379 [-a tasks to allocate per core (default: same value as -q)] 00:07:19.379 Can be used to spread operations across a wider range of memory. 
00:07:19.379 04:56:33 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:19.379 04:56:33 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.379 04:56:33 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:19.379 ************************************ 00:07:19.379 END TEST accel_negative_buffers 00:07:19.379 ************************************ 00:07:19.379 04:56:33 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.379 00:07:19.379 real 0m0.095s 00:07:19.379 user 0m0.075s 00:07:19.379 sys 0m0.063s 00:07:19.379 04:56:33 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.379 04:56:33 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:19.379 04:56:33 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:19.379 04:56:33 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:19.379 04:56:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.379 04:56:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.379 ************************************ 00:07:19.379 START TEST accel_crc32c 00:07:19.379 ************************************ 00:07:19.379 04:56:33 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:19.379 04:56:33 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:19.379 04:56:33 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:19.379 04:56:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.379 04:56:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.379 04:56:33 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:19.379 04:56:33 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c 
/dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:19.379 04:56:33 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:19.379 04:56:33 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.379 04:56:33 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.379 04:56:33 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.379 04:56:33 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.379 04:56:33 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.379 04:56:33 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:19.379 04:56:33 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:19.379 [2024-07-24 04:56:33.997018] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:07:19.379 [2024-07-24 04:56:33.997319] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62334 ] 00:07:19.639 [2024-07-24 04:56:34.179888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.898 [2024-07-24 04:56:34.400711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.158 04:56:34 accel.accel_crc32c -- 
accel/accel.sh@20 -- # val=0x1 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 
00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.158 04:56:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:20.159 04:56:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.159 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.159 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.159 04:56:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:20.159 04:56:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.159 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.159 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.159 04:56:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:20.159 04:56:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.159 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.159 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.159 04:56:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:20.159 04:56:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.159 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.159 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.159 04:56:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:20.159 04:56:34 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.159 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.159 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.159 04:56:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.159 04:56:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.159 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.159 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.159 04:56:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.159 04:56:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.159 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.159 04:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.094 04:56:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.094 04:56:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.094 04:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.094 04:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.094 04:56:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.094 04:56:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.094 04:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.094 04:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.094 04:56:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.094 04:56:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.094 04:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.094 04:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.094 04:56:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.094 04:56:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.094 04:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 
00:07:22.094 04:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.094 04:56:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.094 04:56:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.094 04:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.094 04:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.094 04:56:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.094 04:56:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.094 04:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.094 04:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.094 04:56:36 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.094 04:56:36 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:22.094 04:56:36 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.094 00:07:22.094 real 0m2.673s 00:07:22.094 user 0m2.382s 00:07:22.094 sys 0m0.197s 00:07:22.094 04:56:36 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.094 ************************************ 00:07:22.094 END TEST accel_crc32c 00:07:22.094 ************************************ 00:07:22.094 04:56:36 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:22.094 04:56:36 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:22.094 04:56:36 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:22.094 04:56:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.094 04:56:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.094 ************************************ 00:07:22.094 START TEST accel_crc32c_C2 00:07:22.094 ************************************ 00:07:22.094 04:56:36 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 
00:07:22.094 04:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:22.094 04:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:22.094 04:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.094 04:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.094 04:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:22.094 04:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:22.094 04:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.094 04:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.094 04:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.094 04:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.094 04:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.094 04:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.094 04:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:22.094 04:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:22.094 [2024-07-24 04:56:36.710187] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:07:22.094 [2024-07-24 04:56:36.710294] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62386 ] 00:07:22.353 [2024-07-24 04:56:36.868863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.612 [2024-07-24 04:56:37.080567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.871 04:56:37 
accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.871 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.872 04:56:37 accel.accel_crc32c_C2 
-- accel/accel.sh@20 -- # val=32 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.872 
04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.872 04:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.774 04:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.774 04:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.774 04:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.774 04:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.774 04:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.774 04:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.774 04:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.774 04:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.774 04:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.774 04:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.774 04:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.774 04:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.774 04:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.774 04:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.774 04:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.774 04:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.774 04:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.774 04:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.774 04:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.774 04:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.774 04:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.774 04:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case 
"$var" in 00:07:24.774 04:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.774 04:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.774 04:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.774 ************************************ 00:07:24.774 END TEST accel_crc32c_C2 00:07:24.775 ************************************ 00:07:24.775 04:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:24.775 04:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.775 00:07:24.775 real 0m2.624s 00:07:24.775 user 0m2.359s 00:07:24.775 sys 0m0.174s 00:07:24.775 04:56:39 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.775 04:56:39 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:24.775 04:56:39 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:24.775 04:56:39 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:24.775 04:56:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.775 04:56:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.775 ************************************ 00:07:24.775 START TEST accel_copy 00:07:24.775 ************************************ 00:07:24.775 04:56:39 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:24.775 04:56:39 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:24.775 04:56:39 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:24.775 04:56:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.775 04:56:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.775 04:56:39 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:24.775 04:56:39 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:24.775 04:56:39 accel.accel_copy -- accel/accel.sh@12 
-- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:24.775 04:56:39 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.775 04:56:39 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.775 04:56:39 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.775 04:56:39 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.775 04:56:39 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.775 04:56:39 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:24.775 04:56:39 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:25.034 [2024-07-24 04:56:39.414584] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:07:25.034 [2024-07-24 04:56:39.414741] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62433 ] 00:07:25.034 [2024-07-24 04:56:39.596241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.293 [2024-07-24 04:56:39.805814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:25.552 04:56:40 accel.accel_copy -- 
accel/accel.sh@21 -- # case "$var" in 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.552 04:56:40 accel.accel_copy -- 
accel/accel.sh@22 -- # accel_module=software 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.552 04:56:40 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.552 04:56:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.458 04:56:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:27.458 04:56:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.458 04:56:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.458 04:56:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.458 04:56:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:27.458 04:56:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.458 04:56:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.458 04:56:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.458 04:56:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:27.458 04:56:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.458 04:56:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.458 04:56:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.458 04:56:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:27.458 04:56:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.458 04:56:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.458 04:56:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.458 04:56:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:27.458 04:56:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.458 04:56:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.458 04:56:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.458 04:56:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:27.458 04:56:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" 
in 00:07:27.458 04:56:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.458 04:56:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.458 04:56:42 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.458 04:56:42 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:27.458 04:56:42 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.458 00:07:27.458 real 0m2.675s 00:07:27.458 user 0m2.397s 00:07:27.458 sys 0m0.187s 00:07:27.458 04:56:42 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.458 ************************************ 00:07:27.458 END TEST accel_copy 00:07:27.458 ************************************ 00:07:27.458 04:56:42 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:27.458 04:56:42 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:27.458 04:56:42 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:27.458 04:56:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.458 04:56:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.717 ************************************ 00:07:27.717 START TEST accel_fill 00:07:27.717 ************************************ 00:07:27.717 04:56:42 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:27.717 04:56:42 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:27.717 04:56:42 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:27.717 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.717 04:56:42 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:27.717 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.717 04:56:42 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 
-t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:27.717 04:56:42 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:27.717 04:56:42 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.717 04:56:42 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.717 04:56:42 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.718 04:56:42 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.718 04:56:42 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.718 04:56:42 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:27.718 04:56:42 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:27.718 [2024-07-24 04:56:42.149894] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:07:27.718 [2024-07-24 04:56:42.150049] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62479 ] 00:07:27.718 [2024-07-24 04:56:42.324574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.976 [2024-07-24 04:56:42.532998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:28.235 
04:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.235 04:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.236 
04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:28.236 
04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:28.236 04:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:30.141 04:56:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:30.141 04:56:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:30.141 04:56:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:30.141 04:56:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:30.141 04:56:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:30.141 04:56:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:30.141 04:56:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:30.141 04:56:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:30.141 04:56:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:30.141 04:56:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:30.141 04:56:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:30.141 04:56:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:30.141 04:56:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:30.141 04:56:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:30.141 04:56:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:30.141 04:56:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:30.141 04:56:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:30.141 04:56:44 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:07:30.141 04:56:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:30.141 04:56:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:30.141 04:56:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:30.141 04:56:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:30.141 04:56:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:30.141 04:56:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:30.141 04:56:44 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.141 04:56:44 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:30.141 04:56:44 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.141 00:07:30.141 real 0m2.655s 00:07:30.141 user 0m0.014s 00:07:30.141 sys 0m0.004s 00:07:30.141 04:56:44 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.141 ************************************ 00:07:30.141 END TEST accel_fill 00:07:30.141 ************************************ 00:07:30.141 04:56:44 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:30.401 04:56:44 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:30.401 04:56:44 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:30.401 04:56:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.401 04:56:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.401 ************************************ 00:07:30.401 START TEST accel_copy_crc32c 00:07:30.401 ************************************ 00:07:30.401 04:56:44 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:30.401 04:56:44 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:30.401 04:56:44 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:30.401 04:56:44 accel.accel_copy_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:07:30.401 04:56:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.401 04:56:44 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:30.401 04:56:44 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:30.401 04:56:44 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:30.401 04:56:44 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.401 04:56:44 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.401 04:56:44 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.401 04:56:44 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.401 04:56:44 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.401 04:56:44 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:30.401 04:56:44 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:30.401 [2024-07-24 04:56:44.852351] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:07:30.401 [2024-07-24 04:56:44.852457] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62526 ] 00:07:30.401 [2024-07-24 04:56:45.015352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.660 [2024-07-24 04:56:45.231190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # 
case "$var" in 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.936 04:56:45 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 
00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.936 04:56:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.868 04:56:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.869 04:56:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.869 04:56:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.869 04:56:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.869 04:56:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.869 04:56:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.869 04:56:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.869 04:56:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.869 04:56:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.869 04:56:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.869 04:56:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.869 04:56:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.869 04:56:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.869 04:56:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.869 04:56:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.869 
04:56:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.869 04:56:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.869 04:56:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.869 04:56:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.869 04:56:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.869 04:56:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.869 04:56:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.869 04:56:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.869 04:56:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.869 04:56:47 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.869 04:56:47 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:32.869 04:56:47 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.869 00:07:32.869 real 0m2.638s 00:07:32.869 user 0m0.016s 00:07:32.869 sys 0m0.005s 00:07:32.869 04:56:47 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.869 04:56:47 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:32.869 ************************************ 00:07:32.869 END TEST accel_copy_crc32c 00:07:32.869 ************************************ 00:07:32.869 04:56:47 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:32.869 04:56:47 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:32.869 04:56:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.869 04:56:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.869 ************************************ 00:07:32.869 START TEST accel_copy_crc32c_C2 00:07:32.869 ************************************ 00:07:32.869 04:56:47 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:33.128 04:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:33.128 04:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:33.128 04:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.128 04:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.128 04:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:33.128 04:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:33.128 04:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.128 04:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.128 04:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.128 04:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.128 04:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.128 04:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.129 04:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:33.129 04:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:33.129 [2024-07-24 04:56:47.558785] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:07:33.129 [2024-07-24 04:56:47.558936] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62578 ] 00:07:33.129 [2024-07-24 04:56:47.743731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.388 [2024-07-24 04:56:47.967191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.647 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:33.647 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.647 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.647 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.647 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:33.647 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.647 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.647 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.647 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:33.647 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.647 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.647 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.647 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:33.647 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.647 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:33.648 04:56:48 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # 
val=Yes 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.648 04:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.552 04:56:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.552 04:56:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.552 04:56:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.552 04:56:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.552 04:56:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.552 04:56:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.552 04:56:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.552 04:56:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.552 04:56:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.552 04:56:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.552 04:56:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.552 04:56:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.552 
04:56:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.552 04:56:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.552 04:56:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.552 04:56:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.552 04:56:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.552 04:56:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.552 04:56:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.552 04:56:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.552 04:56:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.552 04:56:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.552 04:56:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.552 04:56:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.552 04:56:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:35.553 04:56:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:35.553 04:56:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.553 00:07:35.553 real 0m2.682s 00:07:35.553 user 0m2.395s 00:07:35.553 sys 0m0.189s 00:07:35.553 ************************************ 00:07:35.553 END TEST accel_copy_crc32c_C2 00:07:35.553 ************************************ 00:07:35.553 04:56:50 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.553 04:56:50 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:35.812 04:56:50 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:35.812 04:56:50 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:35.812 04:56:50 accel -- common/autotest_common.sh@1105 
-- # xtrace_disable 00:07:35.812 04:56:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.812 ************************************ 00:07:35.812 START TEST accel_dualcast 00:07:35.812 ************************************ 00:07:35.812 04:56:50 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:35.812 04:56:50 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:35.812 04:56:50 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:35.812 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:35.812 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:35.812 04:56:50 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:35.812 04:56:50 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:35.812 04:56:50 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:35.812 04:56:50 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.812 04:56:50 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.812 04:56:50 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.812 04:56:50 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.812 04:56:50 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.812 04:56:50 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:35.812 04:56:50 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:35.812 [2024-07-24 04:56:50.298117] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:07:35.812 [2024-07-24 04:56:50.298309] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62624 ] 00:07:36.071 [2024-07-24 04:56:50.481266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.331 [2024-07-24 04:56:50.706146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:36.331 04:56:50 accel.accel_dualcast 
-- accel/accel.sh@19 -- # IFS=: 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 
00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:36.331 04:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.864 04:56:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:38.864 04:56:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case 
"$var" in 00:07:38.864 04:56:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:38.864 04:56:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.864 04:56:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:38.864 04:56:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:38.864 04:56:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:38.864 04:56:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.864 04:56:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:38.864 04:56:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:38.864 04:56:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:38.864 04:56:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.864 04:56:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:38.864 04:56:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:38.864 04:56:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:38.864 04:56:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.864 04:56:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:38.864 04:56:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:38.865 04:56:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:38.865 04:56:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.865 04:56:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:38.865 04:56:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:38.865 04:56:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:38.865 04:56:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.865 04:56:52 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:38.865 04:56:52 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:38.865 04:56:52 accel.accel_dualcast -- accel/accel.sh@27 
-- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.865 00:07:38.865 real 0m2.677s 00:07:38.865 user 0m2.390s 00:07:38.865 sys 0m0.195s 00:07:38.865 04:56:52 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.865 04:56:52 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:38.865 ************************************ 00:07:38.865 END TEST accel_dualcast 00:07:38.865 ************************************ 00:07:38.865 04:56:52 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:38.865 04:56:52 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:38.865 04:56:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.865 04:56:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:38.865 ************************************ 00:07:38.865 START TEST accel_compare 00:07:38.865 ************************************ 00:07:38.865 04:56:52 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:38.865 04:56:52 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:38.865 04:56:52 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:38.865 04:56:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:38.865 04:56:52 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:38.865 04:56:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:38.865 04:56:52 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:38.865 04:56:52 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:38.865 04:56:52 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.865 04:56:52 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.865 04:56:52 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.865 04:56:52 
accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.865 04:56:52 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.865 04:56:52 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:38.865 04:56:52 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:38.865 [2024-07-24 04:56:53.016281] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:07:38.865 [2024-07-24 04:56:53.016399] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62671 ] 00:07:38.865 [2024-07-24 04:56:53.173916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.865 [2024-07-24 04:56:53.393430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.123 04:56:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:39.123 04:56:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.123 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.123 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.123 04:56:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:39.123 04:56:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.123 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.123 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.123 04:56:53 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:39.123 04:56:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.123 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.123 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.123 04:56:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:39.123 04:56:53 accel.accel_compare 
-- accel/accel.sh@21 -- # case "$var" in 00:07:39.123 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.123 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.123 04:56:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:39.123 04:56:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.123 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.123 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.123 04:56:53 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:39.123 04:56:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.123 04:56:53 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:39.123 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.123 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.124 04:56:53 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:39.124 04:56:53 accel.accel_compare -- 
accel/accel.sh@21 -- # case "$var" in 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.124 04:56:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:41.027 04:56:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:41.027 04:56:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:41.027 04:56:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:41.027 04:56:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:41.027 04:56:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:41.027 04:56:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:41.027 04:56:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:41.027 04:56:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:41.027 04:56:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:41.027 04:56:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:41.027 04:56:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:41.027 04:56:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:41.027 04:56:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:41.027 04:56:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:41.027 04:56:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:41.027 04:56:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:41.027 04:56:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:41.027 04:56:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:41.027 04:56:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:41.027 04:56:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:41.027 04:56:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:41.027 04:56:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:41.027 04:56:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 
00:07:41.027 04:56:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:41.027 04:56:55 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.027 04:56:55 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:41.027 04:56:55 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.027 00:07:41.027 real 0m2.627s 00:07:41.027 user 0m2.360s 00:07:41.027 sys 0m0.172s 00:07:41.027 ************************************ 00:07:41.027 END TEST accel_compare 00:07:41.027 ************************************ 00:07:41.027 04:56:55 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.027 04:56:55 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:41.027 04:56:55 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:41.027 04:56:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:41.027 04:56:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.027 04:56:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.027 ************************************ 00:07:41.027 START TEST accel_xor 00:07:41.027 ************************************ 00:07:41.027 04:56:55 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:41.027 04:56:55 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:41.027 04:56:55 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:41.027 04:56:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.027 04:56:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.286 04:56:55 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:41.286 04:56:55 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:41.286 04:56:55 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:41.286 04:56:55 
accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.286 04:56:55 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.286 04:56:55 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.286 04:56:55 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.286 04:56:55 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.286 04:56:55 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:41.286 04:56:55 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:41.286 [2024-07-24 04:56:55.716060] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:07:41.286 [2024-07-24 04:56:55.716215] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62723 ] 00:07:41.286 [2024-07-24 04:56:55.896120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.545 [2024-07-24 04:56:56.111045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.804 04:56:56 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@20 
-- # val=software 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.804 04:56:56 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.806 04:56:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.806 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.806 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.806 04:56:56 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:41.806 04:56:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.806 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.806 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.806 04:56:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.806 04:56:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 
00:07:41.806 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.806 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.806 04:56:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.806 04:56:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.806 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.806 04:56:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:43.711 04:56:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:43.711 04:56:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:43.711 04:56:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:43.711 04:56:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:43.711 04:56:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:43.711 04:56:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:43.711 04:56:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:43.711 04:56:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:43.711 04:56:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:43.711 04:56:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:43.711 04:56:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:43.711 04:56:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:43.711 04:56:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:43.711 04:56:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:43.711 04:56:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:43.711 04:56:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:43.711 04:56:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:43.711 04:56:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:43.711 04:56:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:43.711 04:56:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:43.711 04:56:58 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:43.711 04:56:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:43.711 04:56:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:43.711 04:56:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:43.711 04:56:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.711 04:56:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:43.711 ************************************ 00:07:43.711 END TEST accel_xor 00:07:43.711 ************************************ 00:07:43.711 04:56:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.711 00:07:43.711 real 0m2.669s 00:07:43.711 user 0m2.385s 00:07:43.711 sys 0m0.193s 00:07:43.711 04:56:58 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.711 04:56:58 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:43.971 04:56:58 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:43.971 04:56:58 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:43.971 04:56:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.971 04:56:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:43.971 ************************************ 00:07:43.971 START TEST accel_xor 00:07:43.971 ************************************ 00:07:43.971 04:56:58 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:43.971 04:56:58 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:43.971 04:56:58 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:43.971 04:56:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:43.971 04:56:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:43.971 04:56:58 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:43.971 04:56:58 accel.accel_xor -- accel/accel.sh@12 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:43.971 04:56:58 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:43.971 04:56:58 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.971 04:56:58 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.971 04:56:58 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.971 04:56:58 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.971 04:56:58 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.971 04:56:58 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:43.971 04:56:58 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:43.971 [2024-07-24 04:56:58.441959] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:07:43.971 [2024-07-24 04:56:58.442114] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62764 ] 00:07:44.230 [2024-07-24 04:56:58.620023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.230 [2024-07-24 04:56:58.833035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.489 04:56:59 accel.accel_xor -- 
accel/accel.sh@20 -- # val=0x1 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 
00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.489 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.490 04:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:44.490 04:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.490 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.490 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.490 04:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:44.490 04:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.490 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.490 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.490 04:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:44.490 04:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.490 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.490 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.490 04:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:44.490 04:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.490 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.490 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.490 04:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:44.490 04:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.490 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.490 04:56:59 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.490 04:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:44.490 04:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.490 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.490 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.490 04:56:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:44.490 04:56:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.490 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.490 04:56:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.026 04:57:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:47.026 04:57:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:47.026 04:57:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.026 04:57:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.026 04:57:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:47.026 04:57:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:47.026 04:57:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.026 04:57:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.026 04:57:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:47.026 04:57:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:47.026 04:57:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.026 04:57:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.026 04:57:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:47.026 04:57:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:47.026 04:57:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.026 04:57:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.026 04:57:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:47.026 04:57:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 
00:07:47.026 04:57:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.026 04:57:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.026 04:57:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:47.026 04:57:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:47.026 04:57:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.026 04:57:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.026 04:57:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:47.026 04:57:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:47.026 04:57:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.026 00:07:47.026 real 0m2.662s 00:07:47.026 user 0m0.016s 00:07:47.026 sys 0m0.006s 00:07:47.026 04:57:01 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.026 04:57:01 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:47.026 ************************************ 00:07:47.026 END TEST accel_xor 00:07:47.026 ************************************ 00:07:47.026 04:57:01 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:47.026 04:57:01 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:47.026 04:57:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.026 04:57:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.026 ************************************ 00:07:47.026 START TEST accel_dif_verify 00:07:47.026 ************************************ 00:07:47.026 04:57:01 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:47.026 04:57:01 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:47.026 04:57:01 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:47.026 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:47.026 04:57:01 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:47.026 04:57:01 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:47.026 04:57:01 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:47.026 04:57:01 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:47.026 04:57:01 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.026 04:57:01 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.027 04:57:01 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.027 04:57:01 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.027 04:57:01 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.027 04:57:01 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:47.027 04:57:01 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:47.027 [2024-07-24 04:57:01.164174] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:07:47.027 [2024-07-24 04:57:01.164330] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62816 ] 00:07:47.027 [2024-07-24 04:57:01.347318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.027 [2024-07-24 04:57:01.557930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 
00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:47.292 04:57:01 accel.accel_dif_verify -- 
accel/accel.sh@20 -- # val= 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:47.292 04:57:01 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # read -r var val 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:47.292 04:57:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:49.209 04:57:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:49.209 04:57:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:49.209 04:57:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:49.209 04:57:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:49.209 04:57:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:49.209 04:57:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:49.209 04:57:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:49.209 04:57:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:49.209 04:57:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:49.209 04:57:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:49.209 04:57:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:49.209 04:57:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 
00:07:49.209 04:57:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:49.209 04:57:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:49.209 04:57:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:49.209 04:57:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:49.209 04:57:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:49.209 04:57:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:49.209 04:57:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:49.209 04:57:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:49.209 04:57:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:49.209 04:57:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:49.209 04:57:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:49.209 04:57:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:49.209 04:57:03 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.209 04:57:03 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:49.209 04:57:03 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.209 00:07:49.209 real 0m2.659s 00:07:49.209 user 0m2.372s 00:07:49.209 sys 0m0.198s 00:07:49.209 04:57:03 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.209 ************************************ 00:07:49.209 END TEST accel_dif_verify 00:07:49.209 ************************************ 00:07:49.209 04:57:03 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:49.209 04:57:03 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:49.209 04:57:03 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:49.209 04:57:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.209 04:57:03 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:49.209 ************************************ 00:07:49.209 START TEST accel_dif_generate 00:07:49.209 ************************************ 00:07:49.209 04:57:03 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:49.209 04:57:03 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:49.209 04:57:03 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:49.209 04:57:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.210 04:57:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.210 04:57:03 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:49.210 04:57:03 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:49.210 04:57:03 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:49.210 04:57:03 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.210 04:57:03 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.210 04:57:03 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.210 04:57:03 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.210 04:57:03 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.210 04:57:03 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:49.210 04:57:03 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:49.468 [2024-07-24 04:57:03.887583] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:07:49.468 [2024-07-24 04:57:03.887734] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62863 ] 00:07:49.468 [2024-07-24 04:57:04.068432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.727 [2024-07-24 04:57:04.283569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.986 04:57:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:49.987 04:57:04 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.987 04:57:04 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:49.987 04:57:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:51.892 04:57:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:51.892 04:57:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:51.892 04:57:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:51.892 04:57:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:51.892 04:57:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:51.892 04:57:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:51.892 04:57:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:51.892 04:57:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:51.892 04:57:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:51.892 04:57:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case 
"$var" in 00:07:51.893 04:57:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:51.893 04:57:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:51.893 04:57:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:51.893 04:57:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:51.893 04:57:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:51.893 04:57:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:51.893 04:57:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:51.893 04:57:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:51.893 04:57:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:51.893 04:57:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:51.893 04:57:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:51.893 04:57:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:51.893 04:57:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:51.893 04:57:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:51.893 04:57:06 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:51.893 04:57:06 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:51.893 04:57:06 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.893 00:07:51.893 real 0m2.659s 00:07:51.893 user 0m2.382s 00:07:51.893 sys 0m0.187s 00:07:51.893 ************************************ 00:07:51.893 END TEST accel_dif_generate 00:07:51.893 ************************************ 00:07:51.893 04:57:06 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.893 04:57:06 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:52.152 04:57:06 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test 
-t 1 -w dif_generate_copy 00:07:52.152 04:57:06 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:52.152 04:57:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.152 04:57:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:52.152 ************************************ 00:07:52.152 START TEST accel_dif_generate_copy 00:07:52.152 ************************************ 00:07:52.152 04:57:06 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:52.152 04:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:52.152 04:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:52.152 04:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.152 04:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.152 04:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:52.152 04:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:52.152 04:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:52.152 04:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.152 04:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.152 04:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.152 04:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.152 04:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.152 04:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:52.152 04:57:06 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 
00:07:52.152 [2024-07-24 04:57:06.604397] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:07:52.152 [2024-07-24 04:57:06.604582] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62909 ] 00:07:52.411 [2024-07-24 04:57:06.788009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.411 [2024-07-24 04:57:07.006691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.670 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:52.670 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.670 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.670 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.670 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:52.670 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.670 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.670 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.670 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:52.670 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.670 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.671 04:57:07 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:52.671 
04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.671 04:57:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:54.586 04:57:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:54.586 04:57:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:54.586 04:57:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:54.586 04:57:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:54.586 04:57:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:54.586 04:57:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:54.586 04:57:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:54.586 04:57:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:54.586 04:57:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:54.586 04:57:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:54.586 04:57:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:54.586 04:57:09 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:07:54.586 04:57:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:54.586 04:57:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:54.586 04:57:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:54.586 04:57:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:54.586 04:57:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:54.586 04:57:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:54.586 04:57:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:54.586 04:57:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:54.586 04:57:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:54.586 04:57:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:54.586 04:57:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:54.586 04:57:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:54.846 04:57:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:54.846 04:57:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:54.846 04:57:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.846 ************************************ 00:07:54.846 END TEST accel_dif_generate_copy 00:07:54.846 ************************************ 00:07:54.846 00:07:54.846 real 0m2.675s 00:07:54.846 user 0m0.012s 00:07:54.846 sys 0m0.006s 00:07:54.846 04:57:09 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.846 04:57:09 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:54.846 04:57:09 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:54.846 04:57:09 accel -- accel/accel.sh@116 -- # run_test accel_comp 
accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:54.846 04:57:09 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:54.846 04:57:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.846 04:57:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:54.846 ************************************ 00:07:54.846 START TEST accel_comp 00:07:54.846 ************************************ 00:07:54.846 04:57:09 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:54.846 04:57:09 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:54.846 04:57:09 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:54.846 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:54.846 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:54.846 04:57:09 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:54.846 04:57:09 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:54.846 04:57:09 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:54.846 04:57:09 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:54.846 04:57:09 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:54.846 04:57:09 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.846 04:57:09 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.846 04:57:09 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:54.846 04:57:09 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:54.846 04:57:09 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 
00:07:54.846 [2024-07-24 04:57:09.342214] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:07:54.846 [2024-07-24 04:57:09.342370] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62956 ] 00:07:55.106 [2024-07-24 04:57:09.524177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.106 [2024-07-24 04:57:09.735435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@21 -- # case 
"$var" in 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:55.365 
04:57:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:55.365 04:57:09 
accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:55.365 04:57:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:57.900 04:57:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:57.900 04:57:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:57.900 04:57:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:57.900 04:57:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:57.900 04:57:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:57.900 04:57:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:57.900 04:57:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:57.900 04:57:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:57.900 04:57:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:57.900 04:57:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:57.900 04:57:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:57.900 04:57:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:57.900 04:57:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:57.900 04:57:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:57.900 04:57:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:57.900 04:57:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:57.900 04:57:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:57.900 04:57:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:57.900 04:57:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:57.900 04:57:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:57.900 04:57:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:57.900 04:57:11 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:57.901 04:57:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:57.901 04:57:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:57.901 04:57:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:57.901 04:57:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:57.901 04:57:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:57.901 00:07:57.901 real 0m2.673s 00:07:57.901 user 0m0.017s 00:07:57.901 sys 0m0.004s 00:07:57.901 04:57:11 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.901 ************************************ 00:07:57.901 END TEST accel_comp 00:07:57.901 ************************************ 00:07:57.901 04:57:11 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:57.901 04:57:12 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:57.901 04:57:12 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:57.901 04:57:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.901 04:57:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:57.901 ************************************ 00:07:57.901 START TEST accel_decomp 00:07:57.901 ************************************ 00:07:57.901 04:57:12 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:57.901 04:57:12 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:57.901 04:57:12 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:57.901 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:57.901 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:57.901 04:57:12 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:57.901 04:57:12 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:57.901 04:57:12 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:57.901 04:57:12 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:57.901 04:57:12 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:57.901 04:57:12 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.901 04:57:12 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.901 04:57:12 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:57.901 04:57:12 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:57.901 04:57:12 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:57.901 [2024-07-24 04:57:12.074443] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:07:57.901 [2024-07-24 04:57:12.074616] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63003 ] 00:07:57.901 [2024-07-24 04:57:12.256196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.901 [2024-07-24 04:57:12.469150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.160 04:57:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:58.160 04:57:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.160 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 
00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:58.161 04:57:12 accel.accel_decomp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 
00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:58.161 04:57:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.066 04:57:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:00.066 04:57:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.066 04:57:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.066 04:57:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.066 04:57:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:00.066 04:57:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.066 04:57:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.066 04:57:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.066 04:57:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:00.066 04:57:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.066 04:57:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.066 04:57:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.066 04:57:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:00.066 04:57:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.066 04:57:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.066 04:57:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.066 04:57:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:00.066 04:57:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.066 04:57:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.066 04:57:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.066 04:57:14 accel.accel_decomp -- 
accel/accel.sh@20 -- # val= 00:08:00.066 04:57:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.066 04:57:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.066 04:57:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.066 04:57:14 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:00.066 04:57:14 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:00.066 04:57:14 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:00.066 00:08:00.066 real 0m2.661s 00:08:00.066 user 0m2.366s 00:08:00.066 sys 0m0.200s 00:08:00.066 04:57:14 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.066 04:57:14 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:00.066 ************************************ 00:08:00.066 END TEST accel_decomp 00:08:00.066 ************************************ 00:08:00.326 04:57:14 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:00.326 04:57:14 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:00.326 04:57:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.326 04:57:14 accel -- common/autotest_common.sh@10 -- # set +x 00:08:00.326 ************************************ 00:08:00.326 START TEST accel_decomp_full 00:08:00.326 ************************************ 00:08:00.326 04:57:14 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:00.326 04:57:14 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:00.326 04:57:14 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:00.326 04:57:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:00.326 04:57:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read 
-r var val 00:08:00.326 04:57:14 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:00.326 04:57:14 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:00.326 04:57:14 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:00.326 04:57:14 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:00.326 04:57:14 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:00.326 04:57:14 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.326 04:57:14 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.326 04:57:14 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:00.326 04:57:14 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:00.326 04:57:14 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:00.326 [2024-07-24 04:57:14.797838] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:08:00.326 [2024-07-24 04:57:14.797995] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63053 ] 00:08:00.585 [2024-07-24 04:57:14.980009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.585 [2024-07-24 04:57:15.195432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.844 04:57:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:00.844 04:57:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:00.844 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:00.844 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:00.844 04:57:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:00.844 04:57:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:00.844 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:00.844 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:00.844 04:57:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:00.844 04:57:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:00.844 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:00.844 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:00.844 04:57:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:00.844 04:57:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:00.845 04:57:15 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:00.845 04:57:15 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:00.845 04:57:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.379 04:57:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:03.379 04:57:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.379 04:57:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.379 04:57:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.379 04:57:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:03.380 04:57:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.380 04:57:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.380 04:57:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.380 04:57:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:03.380 04:57:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.380 04:57:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.380 04:57:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.380 04:57:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:03.380 04:57:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.380 04:57:17 accel.accel_decomp_full -- accel/accel.sh@19 
-- # IFS=: 00:08:03.380 04:57:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.380 04:57:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:03.380 04:57:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.380 04:57:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.380 04:57:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.380 04:57:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:03.380 04:57:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.380 04:57:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.380 04:57:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.380 04:57:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:03.380 04:57:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:03.380 04:57:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:03.380 ************************************ 00:08:03.380 END TEST accel_decomp_full 00:08:03.380 ************************************ 00:08:03.380 00:08:03.380 real 0m2.694s 00:08:03.380 user 0m2.412s 00:08:03.380 sys 0m0.186s 00:08:03.380 04:57:17 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.380 04:57:17 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:03.380 04:57:17 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:03.380 04:57:17 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:03.380 04:57:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.380 04:57:17 accel -- common/autotest_common.sh@10 -- # set +x 00:08:03.380 ************************************ 00:08:03.380 START TEST accel_decomp_mcore 00:08:03.380 
************************************ 00:08:03.380 04:57:17 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:03.380 04:57:17 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:03.380 04:57:17 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:03.380 04:57:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.380 04:57:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.380 04:57:17 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:03.380 04:57:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:03.380 04:57:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:03.380 04:57:17 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:03.380 04:57:17 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:03.380 04:57:17 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.380 04:57:17 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.380 04:57:17 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:03.380 04:57:17 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:03.380 04:57:17 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:03.380 [2024-07-24 04:57:17.538679] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:08:03.380 [2024-07-24 04:57:17.538908] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63101 ] 00:08:03.380 [2024-07-24 04:57:17.697674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.380 [2024-07-24 04:57:17.921138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.380 [2024-07-24 04:57:17.921274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.380 [2024-07-24 04:57:17.921226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.380 [2024-07-24 04:57:17.921309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.639 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:03.639 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.640 04:57:18 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.640 04:57:18 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.640 04:57:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.175 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:06.175 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.175 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.175 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.175 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:06.175 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.176 
04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.176 ************************************ 00:08:06.176 END TEST accel_decomp_mcore 00:08:06.176 ************************************ 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:06.176 00:08:06.176 real 0m2.722s 00:08:06.176 user 0m0.021s 00:08:06.176 sys 0m0.004s 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.176 04:57:20 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:06.176 04:57:20 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:06.176 04:57:20 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:06.176 04:57:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.176 04:57:20 accel -- common/autotest_common.sh@10 -- # set +x 00:08:06.176 ************************************ 00:08:06.176 START TEST accel_decomp_full_mcore 00:08:06.176 ************************************ 00:08:06.176 04:57:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:06.176 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:06.176 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:06.176 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.176 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var 
val 00:08:06.176 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:06.176 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:06.176 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:06.176 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:06.176 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:06.176 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.176 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.176 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:06.176 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:06.176 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:06.176 [2024-07-24 04:57:20.328664] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:08:06.176 [2024-07-24 04:57:20.328824] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63156 ] 00:08:06.176 [2024-07-24 04:57:20.508438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:06.176 [2024-07-24 04:57:20.729299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.176 [2024-07-24 04:57:20.729479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.176 [2024-07-24 04:57:20.729641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.176 [2024-07-24 04:57:20.729734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 
00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.436 
04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 
00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.436 04:57:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:08.972 04:57:23 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 
-- # read -r var val 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:08.972 00:08:08.972 real 0m2.777s 00:08:08.972 user 0m0.018s 00:08:08.972 sys 0m0.006s 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.972 04:57:23 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:08.972 ************************************ 00:08:08.972 END TEST accel_decomp_full_mcore 00:08:08.972 ************************************ 00:08:08.972 04:57:23 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:08.972 04:57:23 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:08.972 04:57:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.972 04:57:23 accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.972 ************************************ 00:08:08.972 START TEST accel_decomp_mthread 00:08:08.972 
************************************ 00:08:08.972 04:57:23 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:08.972 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:08.972 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:08.972 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.972 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.972 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:08.972 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:08.972 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:08.972 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.972 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.972 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.972 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.972 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.972 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:08.972 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:08.972 [2024-07-24 04:57:23.148660] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:08:08.972 [2024-07-24 04:57:23.148788] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63200 ] 00:08:08.972 [2024-07-24 04:57:23.308530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.972 [2024-07-24 04:57:23.522342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:09.233 04:57:23 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # 
accel_module=software 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.233 04:57:23 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.233 04:57:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:11.170 00:08:11.170 real 0m2.642s 00:08:11.170 user 0m2.376s 00:08:11.170 sys 0m0.180s 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.170 04:57:25 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 
00:08:11.170 ************************************ 00:08:11.170 END TEST accel_decomp_mthread 00:08:11.170 ************************************ 00:08:11.170 04:57:25 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:11.170 04:57:25 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:11.170 04:57:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.170 04:57:25 accel -- common/autotest_common.sh@10 -- # set +x 00:08:11.429 ************************************ 00:08:11.429 START TEST accel_decomp_full_mthread 00:08:11.429 ************************************ 00:08:11.429 04:57:25 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:11.429 04:57:25 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:11.429 04:57:25 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:11.429 04:57:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.429 04:57:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.429 04:57:25 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:11.429 04:57:25 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:11.429 04:57:25 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:11.429 04:57:25 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.429 04:57:25 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.429 04:57:25 
accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.429 04:57:25 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.429 04:57:25 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.429 04:57:25 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:11.429 04:57:25 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:11.429 [2024-07-24 04:57:25.868736] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:08:11.429 [2024-07-24 04:57:25.868890] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63252 ] 00:08:11.429 [2024-07-24 04:57:26.046854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.689 [2024-07-24 04:57:26.264884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.948 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:11.948 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.948 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.948 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.948 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:11.948 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.948 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.948 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.948 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:11.948 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.948 
04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.948 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.948 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:11.948 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.948 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.948 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.948 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:11.948 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.948 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.948 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.948 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:11.948 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.948 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.948 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.948 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # IFS=: 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.949 04:57:26 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.949 04:57:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:14.481 
04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:14.481 04:57:28 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:14.481 00:08:14.481 real 0m2.728s 00:08:14.481 user 0m2.435s 00:08:14.481 sys 0m0.203s 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.481 04:57:28 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:14.481 ************************************ 00:08:14.481 END TEST accel_decomp_full_mthread 00:08:14.481 ************************************ 00:08:14.481 04:57:28 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:14.481 04:57:28 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:14.481 04:57:28 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:14.481 04:57:28 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:14.481 04:57:28 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:14.481 04:57:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.481 04:57:28 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:14.481 04:57:28 accel -- common/autotest_common.sh@10 -- # set +x 00:08:14.481 04:57:28 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:14.481 04:57:28 accel -- accel/accel.sh@34 -- # [[ 0 -gt 
0 ]] 00:08:14.481 04:57:28 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:14.481 04:57:28 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:14.481 04:57:28 accel -- accel/accel.sh@41 -- # jq -r . 00:08:14.482 ************************************ 00:08:14.482 START TEST accel_dif_functional_tests 00:08:14.482 ************************************ 00:08:14.482 04:57:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:14.482 [2024-07-24 04:57:28.709836] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:08:14.482 [2024-07-24 04:57:28.709993] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63301 ] 00:08:14.482 [2024-07-24 04:57:28.891715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:14.482 [2024-07-24 04:57:29.108665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.482 [2024-07-24 04:57:29.108823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.482 [2024-07-24 04:57:29.108850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.740 [2024-07-24 04:57:29.349846] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:14.998 00:08:14.998 00:08:14.998 CUnit - A unit testing framework for C - Version 2.1-3 00:08:14.998 http://cunit.sourceforge.net/ 00:08:14.998 00:08:14.998 00:08:14.998 Suite: accel_dif 00:08:14.998 Test: verify: DIF generated, GUARD check ...passed 00:08:14.998 Test: verify: DIF generated, APPTAG check ...passed 00:08:14.998 Test: verify: DIF generated, REFTAG check ...passed 00:08:14.998 Test: verify: DIF not generated, GUARD check ...passed 00:08:14.998 Test: verify: DIF not generated, APPTAG check 
...passed 00:08:14.998 Test: verify: DIF not generated, REFTAG check ...passed 00:08:14.998 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:14.998 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:08:14.998 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:14.998 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:14.998 Test: verify: REFTAG_INIT correct, REFTAG check ...[2024-07-24 04:57:29.472460] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:14.998 [2024-07-24 04:57:29.472545] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:14.998 [2024-07-24 04:57:29.472586] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:14.998 [2024-07-24 04:57:29.472686] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:14.998 passed 00:08:14.998 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:08:14.998 Test: verify copy: DIF generated, GUARD check ...passed 00:08:14.998 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:14.998 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:14.998 Test: verify copy: DIF not generated, GUARD check ...[2024-07-24 04:57:29.472878] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:14.998 [2024-07-24 04:57:29.473086] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:14.998 passed 00:08:14.998 Test: verify copy: DIF not generated, APPTAG check ...passed 00:08:14.998 Test: verify copy: DIF not generated, REFTAG check ...passed 00:08:14.998 Test: generate copy: DIF generated, GUARD check ...passed 00:08:14.998 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:14.998 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:14.998 Test: generate copy: DIF 
generated, no GUARD check flag set ...[2024-07-24 04:57:29.473137] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:14.998 [2024-07-24 04:57:29.473186] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:14.998 passed 00:08:14.998 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:14.998 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:14.998 Test: generate copy: iovecs-len validate ...passed 00:08:14.998 Test: generate copy: buffer alignment validate ...passed 00:08:14.998 00:08:14.998 Run Summary: Type Total Ran Passed Failed Inactive 00:08:14.998 suites 1 1 n/a 0 0 00:08:14.998 tests 26 26 26 0 0 00:08:14.998 asserts 115 115 115 0 n/a 00:08:14.998 00:08:14.998 Elapsed time = 0.005 seconds 00:08:14.998 [2024-07-24 04:57:29.473523] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:16.373 00:08:16.373 real 0m2.178s 00:08:16.373 user 0m4.281s 00:08:16.373 sys 0m0.265s 00:08:16.373 ************************************ 00:08:16.373 END TEST accel_dif_functional_tests 00:08:16.373 ************************************ 00:08:16.373 04:57:30 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.373 04:57:30 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:16.373 00:08:16.373 real 1m4.723s 00:08:16.373 user 1m10.593s 00:08:16.373 sys 0m6.176s 00:08:16.374 04:57:30 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.374 04:57:30 accel -- common/autotest_common.sh@10 -- # set +x 00:08:16.374 ************************************ 00:08:16.374 END TEST accel 00:08:16.374 ************************************ 00:08:16.374 04:57:30 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:16.374 04:57:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:16.374 04:57:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.374 04:57:30 -- common/autotest_common.sh@10 -- # set +x 00:08:16.374 ************************************ 00:08:16.374 START TEST accel_rpc 00:08:16.374 ************************************ 00:08:16.374 04:57:30 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:16.374 * Looking for test storage... 
00:08:16.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:16.374 04:57:30 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:16.374 04:57:30 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=63387 00:08:16.374 04:57:30 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:16.374 04:57:30 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 63387 00:08:16.374 04:57:30 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 63387 ']' 00:08:16.374 04:57:30 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.374 04:57:30 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:16.374 04:57:30 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.374 04:57:30 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:16.374 04:57:30 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.632 [2024-07-24 04:57:31.112798] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:08:16.632 [2024-07-24 04:57:31.112969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63387 ] 00:08:16.891 [2024-07-24 04:57:31.295639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.891 [2024-07-24 04:57:31.507477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.458 04:57:31 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:17.458 04:57:31 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:17.458 04:57:31 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:17.458 04:57:31 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:17.458 04:57:31 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:17.458 04:57:31 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:17.458 04:57:31 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:17.458 04:57:31 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:17.458 04:57:31 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.458 04:57:31 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.458 ************************************ 00:08:17.458 START TEST accel_assign_opcode 00:08:17.458 ************************************ 00:08:17.458 04:57:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:08:17.458 04:57:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:17.458 04:57:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.458 04:57:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:17.458 [2024-07-24 04:57:32.012305] 
accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:17.458 04:57:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.458 04:57:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:17.458 04:57:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.458 04:57:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:17.458 [2024-07-24 04:57:32.020286] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:17.458 04:57:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.458 04:57:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:17.459 04:57:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.459 04:57:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:17.717 [2024-07-24 04:57:32.256083] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:18.285 04:57:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.285 04:57:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:18.285 04:57:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.285 04:57:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:18.285 04:57:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:18.285 04:57:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:18.285 04:57:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.544 software 00:08:18.544 00:08:18.544 real 0m0.919s 
00:08:18.544 user 0m0.048s 00:08:18.544 sys 0m0.015s 00:08:18.544 04:57:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.544 ************************************ 00:08:18.544 04:57:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:18.544 END TEST accel_assign_opcode 00:08:18.544 ************************************ 00:08:18.544 04:57:32 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 63387 00:08:18.544 04:57:32 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 63387 ']' 00:08:18.544 04:57:32 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 63387 00:08:18.544 04:57:32 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:08:18.544 04:57:32 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:18.544 04:57:32 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63387 00:08:18.544 04:57:32 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:18.544 killing process with pid 63387 00:08:18.544 04:57:32 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:18.544 04:57:32 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63387' 00:08:18.544 04:57:33 accel_rpc -- common/autotest_common.sh@967 -- # kill 63387 00:08:18.544 04:57:33 accel_rpc -- common/autotest_common.sh@972 -- # wait 63387 00:08:21.075 00:08:21.075 real 0m4.554s 00:08:21.075 user 0m4.494s 00:08:21.075 sys 0m0.589s 00:08:21.075 ************************************ 00:08:21.075 END TEST accel_rpc 00:08:21.075 ************************************ 00:08:21.075 04:57:35 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.075 04:57:35 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.075 04:57:35 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:21.075 04:57:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 
']' 00:08:21.075 04:57:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.075 04:57:35 -- common/autotest_common.sh@10 -- # set +x 00:08:21.075 ************************************ 00:08:21.075 START TEST app_cmdline 00:08:21.075 ************************************ 00:08:21.075 04:57:35 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:21.075 * Looking for test storage... 00:08:21.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:21.075 04:57:35 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:21.075 04:57:35 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=63504 00:08:21.075 04:57:35 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 63504 00:08:21.075 04:57:35 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 63504 ']' 00:08:21.075 04:57:35 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.075 04:57:35 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:21.075 04:57:35 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:21.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.075 04:57:35 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.075 04:57:35 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:21.075 04:57:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:21.333 [2024-07-24 04:57:35.740919] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:08:21.333 [2024-07-24 04:57:35.741092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63504 ] 00:08:21.333 [2024-07-24 04:57:35.923437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.592 [2024-07-24 04:57:36.143657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.850 [2024-07-24 04:57:36.374900] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:22.418 04:57:37 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:22.418 04:57:37 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:08:22.418 04:57:37 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:22.677 { 00:08:22.677 "version": "SPDK v24.09-pre git sha1 78cbcfdde", 00:08:22.677 "fields": { 00:08:22.677 "major": 24, 00:08:22.677 "minor": 9, 00:08:22.677 "patch": 0, 00:08:22.677 "suffix": "-pre", 00:08:22.677 "commit": "78cbcfdde" 00:08:22.677 } 00:08:22.677 } 00:08:22.677 04:57:37 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:22.677 04:57:37 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:22.677 04:57:37 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:22.677 04:57:37 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:22.677 04:57:37 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:22.677 04:57:37 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:22.677 04:57:37 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:22.677 04:57:37 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.677 04:57:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 
00:08:22.677 04:57:37 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.677 04:57:37 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:22.677 04:57:37 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:22.677 04:57:37 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:22.677 04:57:37 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:22.677 04:57:37 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:22.677 04:57:37 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:22.677 04:57:37 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:22.677 04:57:37 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:22.677 04:57:37 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:22.677 04:57:37 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:22.677 04:57:37 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:22.677 04:57:37 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:22.677 04:57:37 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:22.677 04:57:37 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:22.936 request: 00:08:22.936 { 00:08:22.936 "method": "env_dpdk_get_mem_stats", 00:08:22.936 "req_id": 1 00:08:22.936 } 00:08:22.936 Got JSON-RPC error response 00:08:22.936 response: 00:08:22.936 { 00:08:22.936 "code": -32601, 00:08:22.936 
"message": "Method not found" 00:08:22.936 } 00:08:22.936 04:57:37 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:22.936 04:57:37 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:22.936 04:57:37 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:22.936 04:57:37 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:22.936 04:57:37 app_cmdline -- app/cmdline.sh@1 -- # killprocess 63504 00:08:22.936 04:57:37 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 63504 ']' 00:08:22.936 04:57:37 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 63504 00:08:22.936 04:57:37 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:08:22.936 04:57:37 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:22.936 04:57:37 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63504 00:08:23.195 04:57:37 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:23.195 killing process with pid 63504 00:08:23.195 04:57:37 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:23.195 04:57:37 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63504' 00:08:23.195 04:57:37 app_cmdline -- common/autotest_common.sh@967 -- # kill 63504 00:08:23.195 04:57:37 app_cmdline -- common/autotest_common.sh@972 -- # wait 63504 00:08:25.753 00:08:25.753 real 0m4.529s 00:08:25.753 user 0m4.876s 00:08:25.753 sys 0m0.609s 00:08:25.753 04:57:40 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.753 04:57:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:25.753 ************************************ 00:08:25.753 END TEST app_cmdline 00:08:25.753 ************************************ 00:08:25.753 04:57:40 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:25.753 04:57:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 
1 ']' 00:08:25.753 04:57:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.753 04:57:40 -- common/autotest_common.sh@10 -- # set +x 00:08:25.753 ************************************ 00:08:25.753 START TEST version 00:08:25.753 ************************************ 00:08:25.753 04:57:40 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:25.753 * Looking for test storage... 00:08:25.753 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:25.753 04:57:40 version -- app/version.sh@17 -- # get_header_version major 00:08:25.753 04:57:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:25.753 04:57:40 version -- app/version.sh@14 -- # cut -f2 00:08:25.753 04:57:40 version -- app/version.sh@14 -- # tr -d '"' 00:08:25.753 04:57:40 version -- app/version.sh@17 -- # major=24 00:08:25.753 04:57:40 version -- app/version.sh@18 -- # get_header_version minor 00:08:25.753 04:57:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:25.753 04:57:40 version -- app/version.sh@14 -- # cut -f2 00:08:25.753 04:57:40 version -- app/version.sh@14 -- # tr -d '"' 00:08:25.753 04:57:40 version -- app/version.sh@18 -- # minor=9 00:08:25.753 04:57:40 version -- app/version.sh@19 -- # get_header_version patch 00:08:25.753 04:57:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:25.753 04:57:40 version -- app/version.sh@14 -- # tr -d '"' 00:08:25.753 04:57:40 version -- app/version.sh@14 -- # cut -f2 00:08:25.753 04:57:40 version -- app/version.sh@19 -- # patch=0 00:08:25.753 04:57:40 version -- app/version.sh@20 -- # get_header_version suffix 00:08:25.753 04:57:40 version -- app/version.sh@13 -- # grep -E '^#define 
SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:25.753 04:57:40 version -- app/version.sh@14 -- # cut -f2 00:08:25.753 04:57:40 version -- app/version.sh@14 -- # tr -d '"' 00:08:25.753 04:57:40 version -- app/version.sh@20 -- # suffix=-pre 00:08:25.753 04:57:40 version -- app/version.sh@22 -- # version=24.9 00:08:25.753 04:57:40 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:25.753 04:57:40 version -- app/version.sh@28 -- # version=24.9rc0 00:08:25.753 04:57:40 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:25.753 04:57:40 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:25.753 04:57:40 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:25.753 04:57:40 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:25.753 00:08:25.753 real 0m0.174s 00:08:25.753 user 0m0.087s 00:08:25.753 sys 0m0.123s 00:08:25.753 04:57:40 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.753 04:57:40 version -- common/autotest_common.sh@10 -- # set +x 00:08:25.753 ************************************ 00:08:25.753 END TEST version 00:08:25.753 ************************************ 00:08:25.753 04:57:40 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:25.753 04:57:40 -- spdk/autotest.sh@198 -- # uname -s 00:08:25.753 04:57:40 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:25.753 04:57:40 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:25.753 04:57:40 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:08:25.753 04:57:40 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:08:25.753 04:57:40 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:25.753 04:57:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:08:25.753 04:57:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.753 04:57:40 -- common/autotest_common.sh@10 -- # set +x 00:08:25.753 ************************************ 00:08:25.753 START TEST spdk_dd 00:08:25.753 ************************************ 00:08:25.753 04:57:40 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:26.013 * Looking for test storage... 00:08:26.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:26.013 04:57:40 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:26.013 04:57:40 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.013 04:57:40 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.013 04:57:40 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.013 04:57:40 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.013 04:57:40 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.013 04:57:40 spdk_dd -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.013 04:57:40 spdk_dd -- paths/export.sh@5 -- # export PATH 00:08:26.013 04:57:40 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.013 04:57:40 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:26.273 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:26.273 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:26.273 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:26.273 04:57:40 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:08:26.273 04:57:40 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:08:26.273 04:57:40 spdk_dd -- 
scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@230 -- # local class 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@232 -- # local progif 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@233 -- # class=01 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@15 -- # local i 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@24 -- # return 0 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@15 -- # 
local i 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@24 -- # return 0 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:08:26.273 04:57:40 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:08:26.534 04:57:40 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:08:26.534 04:57:40 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:08:26.534 04:57:40 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:08:26.534 04:57:40 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:08:26.534 04:57:40 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:26.534 04:57:40 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@139 -- # local lib 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.8 == liburing.so.* ]] 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ 
libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:08:26.534 04:57:40 
spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:08:26.534 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 
-- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd 
-- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 
04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:08:26.535 
04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == 
liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@143 -- # [[ 
liburing.so.2 == liburing.so.* ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:08:26.535 * spdk_dd linked to liburing 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:26.535 04:57:40 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:26.535 04:57:40 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:26.535 04:57:40 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:08:26.535 04:57:40 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:26.535 04:57:40 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:26.535 04:57:40 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:26.535 04:57:40 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:26.535 04:57:40 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:26.535 04:57:40 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:26.535 04:57:40 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:26.535 04:57:40 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@19 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:26.536 
04:57:40 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:26.536 
04:57:40 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:26.536 04:57:40 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:08:26.536 04:57:40 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:08:26.536 04:57:40 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:08:26.536 04:57:40 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:08:26.536 04:57:40 spdk_dd -- dd/common.sh@153 -- # return 0 00:08:26.536 04:57:40 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:08:26.536 04:57:40 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:08:26.536 04:57:40 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 
4 -le 1 ']' 00:08:26.536 04:57:40 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.536 04:57:40 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:26.536 ************************************ 00:08:26.536 START TEST spdk_dd_basic_rw 00:08:26.536 ************************************ 00:08:26.536 04:57:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:08:26.536 * Looking for test storage... 00:08:26.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:26.536 04:57:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:26.536 04:57:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.536 04:57:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.536 04:57:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.536 04:57:41 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.536 04:57:41 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.536 04:57:41 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.536 04:57:41 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:08:26.537 04:57:41 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.537 04:57:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:08:26.537 04:57:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # 
nvmes=("$@") 00:08:26.537 04:57:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:08:26.537 04:57:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:08:26.537 04:57:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:08:26.537 04:57:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:26.537 04:57:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:26.537 04:57:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:26.537 04:57:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:26.537 04:57:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:08:26.537 04:57:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:08:26.537 04:57:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:08:26.537 04:57:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:08:26.799 04:57:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 
1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not 
Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands 
Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes 
Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:08:26.799 04:57:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:08:26.799 04:57:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change 
Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported 
Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell 
Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: 
Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:08:26.799 04:57:41 
spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:08:26.799 04:57:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:08:26.800 04:57:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:08:26.800 04:57:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:08:26.800 04:57:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:26.800 04:57:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:08:26.800 04:57:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:26.800 04:57:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:26.800 04:57:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.800 04:57:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:26.800 04:57:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:26.800 ************************************ 00:08:26.800 START TEST dd_bs_lt_native_bs 00:08:26.800 ************************************ 00:08:26.800 04:57:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:26.800 04:57:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:08:26.800 04:57:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:26.800 04:57:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.800 04:57:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:26.800 04:57:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.800 04:57:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:26.800 04:57:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.800 04:57:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:26.800 04:57:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.800 04:57:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:26.800 04:57:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:27.059 { 00:08:27.059 "subsystems": [ 00:08:27.059 { 00:08:27.059 "subsystem": "bdev", 00:08:27.059 "config": [ 00:08:27.059 { 00:08:27.059 "params": { 00:08:27.059 "trtype": "pcie", 00:08:27.059 "traddr": "0000:00:10.0", 00:08:27.059 "name": "Nvme0" 00:08:27.059 }, 00:08:27.059 "method": "bdev_nvme_attach_controller" 00:08:27.059 }, 00:08:27.059 { 00:08:27.059 "method": "bdev_wait_for_examine" 00:08:27.059 } 00:08:27.059 ] 00:08:27.059 } 00:08:27.059 ] 00:08:27.059 } 00:08:27.060 [2024-07-24 04:57:41.523696] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:08:27.060 [2024-07-24 04:57:41.523861] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63855 ] 00:08:27.319 [2024-07-24 04:57:41.701289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.578 [2024-07-24 04:57:42.035355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.837 [2024-07-24 04:57:42.267440] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:27.837 [2024-07-24 04:57:42.461475] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:08:27.837 [2024-07-24 04:57:42.461571] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:28.775 [2024-07-24 04:57:43.046177] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:29.034 00:08:29.034 real 0m2.088s 00:08:29.034 user 0m1.770s 00:08:29.034 sys 0m0.264s 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.034 ************************************ 00:08:29.034 END TEST 
dd_bs_lt_native_bs 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:08:29.034 ************************************ 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:29.034 ************************************ 00:08:29.034 START TEST dd_rw 00:08:29.034 ************************************ 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in 
"${qds[@]}" 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:29.034 04:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 04:57:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:08:29.603 04:57:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:29.603 04:57:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:29.603 04:57:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:29.603 { 00:08:29.603 "subsystems": [ 00:08:29.603 { 00:08:29.603 "subsystem": "bdev", 00:08:29.603 "config": [ 00:08:29.603 { 00:08:29.603 "params": { 00:08:29.603 "trtype": "pcie", 00:08:29.603 "traddr": "0000:00:10.0", 00:08:29.603 "name": "Nvme0" 00:08:29.603 }, 00:08:29.603 "method": "bdev_nvme_attach_controller" 00:08:29.603 }, 00:08:29.603 { 00:08:29.603 "method": "bdev_wait_for_examine" 00:08:29.603 } 00:08:29.603 ] 00:08:29.603 } 00:08:29.603 ] 00:08:29.603 } 00:08:29.603 [2024-07-24 04:57:44.233333] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:08:29.603 [2024-07-24 04:57:44.233494] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63909 ] 00:08:29.862 [2024-07-24 04:57:44.415624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.121 [2024-07-24 04:57:44.628310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.380 [2024-07-24 04:57:44.860294] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:32.018  Copying: 60/60 [kB] (average 19 MBps) 00:08:32.018 00:08:32.018 04:57:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:32.018 04:57:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:08:32.018 04:57:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:32.018 04:57:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:32.018 { 00:08:32.018 "subsystems": [ 00:08:32.018 { 00:08:32.018 "subsystem": "bdev", 00:08:32.018 "config": [ 00:08:32.018 { 00:08:32.018 "params": { 00:08:32.018 "trtype": "pcie", 00:08:32.018 "traddr": "0000:00:10.0", 00:08:32.018 "name": "Nvme0" 00:08:32.018 }, 00:08:32.018 "method": "bdev_nvme_attach_controller" 00:08:32.018 }, 00:08:32.018 { 00:08:32.018 "method": "bdev_wait_for_examine" 00:08:32.018 } 00:08:32.018 ] 00:08:32.018 } 00:08:32.018 ] 00:08:32.018 } 00:08:32.018 [2024-07-24 04:57:46.454674] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:08:32.018 [2024-07-24 04:57:46.454836] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63940 ] 00:08:32.018 [2024-07-24 04:57:46.635637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.277 [2024-07-24 04:57:46.855777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.536 [2024-07-24 04:57:47.091447] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:33.732  Copying: 60/60 [kB] (average 19 MBps) 00:08:33.732 00:08:33.732 04:57:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:33.732 04:57:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:33.732 04:57:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:33.732 04:57:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:33.732 04:57:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:33.732 04:57:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:33.732 04:57:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:33.732 04:57:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:33.732 04:57:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:33.732 04:57:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:33.732 04:57:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:33.991 { 00:08:33.991 "subsystems": [ 00:08:33.991 { 
00:08:33.991 "subsystem": "bdev", 00:08:33.991 "config": [ 00:08:33.991 { 00:08:33.991 "params": { 00:08:33.991 "trtype": "pcie", 00:08:33.991 "traddr": "0000:00:10.0", 00:08:33.991 "name": "Nvme0" 00:08:33.991 }, 00:08:33.991 "method": "bdev_nvme_attach_controller" 00:08:33.992 }, 00:08:33.992 { 00:08:33.992 "method": "bdev_wait_for_examine" 00:08:33.992 } 00:08:33.992 ] 00:08:33.992 } 00:08:33.992 ] 00:08:33.992 } 00:08:33.992 [2024-07-24 04:57:48.445448] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:08:33.992 [2024-07-24 04:57:48.445624] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63973 ] 00:08:34.250 [2024-07-24 04:57:48.628330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.250 [2024-07-24 04:57:48.841850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.509 [2024-07-24 04:57:49.079494] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:36.148  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:36.148 00:08:36.148 04:57:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:36.148 04:57:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:08:36.148 04:57:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:08:36.148 04:57:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:08:36.148 04:57:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:36.148 04:57:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:36.148 04:57:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:36.716 04:57:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:08:36.716 04:57:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:36.716 04:57:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:36.716 04:57:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:36.716 { 00:08:36.716 "subsystems": [ 00:08:36.716 { 00:08:36.716 "subsystem": "bdev", 00:08:36.716 "config": [ 00:08:36.716 { 00:08:36.716 "params": { 00:08:36.716 "trtype": "pcie", 00:08:36.716 "traddr": "0000:00:10.0", 00:08:36.716 "name": "Nvme0" 00:08:36.716 }, 00:08:36.716 "method": "bdev_nvme_attach_controller" 00:08:36.716 }, 00:08:36.716 { 00:08:36.716 "method": "bdev_wait_for_examine" 00:08:36.716 } 00:08:36.716 ] 00:08:36.716 } 00:08:36.716 ] 00:08:36.716 } 00:08:36.716 [2024-07-24 04:57:51.208449] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:08:36.716 [2024-07-24 04:57:51.208622] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64014 ] 00:08:36.975 [2024-07-24 04:57:51.390027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.975 [2024-07-24 04:57:51.604324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.234 [2024-07-24 04:57:51.838130] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:38.430  Copying: 60/60 [kB] (average 58 MBps) 00:08:38.430 00:08:38.690 04:57:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:38.690 04:57:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:38.690 04:57:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:08:38.690 04:57:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:38.690 { 00:08:38.690 "subsystems": [ 00:08:38.690 { 00:08:38.690 "subsystem": "bdev", 00:08:38.690 "config": [ 00:08:38.690 { 00:08:38.690 "params": { 00:08:38.690 "trtype": "pcie", 00:08:38.690 "traddr": "0000:00:10.0", 00:08:38.690 "name": "Nvme0" 00:08:38.690 }, 00:08:38.690 "method": "bdev_nvme_attach_controller" 00:08:38.690 }, 00:08:38.690 { 00:08:38.690 "method": "bdev_wait_for_examine" 00:08:38.690 } 00:08:38.690 ] 00:08:38.690 } 00:08:38.690 ] 00:08:38.690 } 00:08:38.690 [2024-07-24 04:57:53.175799] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:08:38.690 [2024-07-24 04:57:53.175958] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64041 ] 00:08:38.949 [2024-07-24 04:57:53.346668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.949 [2024-07-24 04:57:53.554952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.208 [2024-07-24 04:57:53.778631] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:40.843  Copying: 60/60 [kB] (average 58 MBps) 00:08:40.843 00:08:40.843 04:57:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:40.844 04:57:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:40.844 04:57:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:40.844 04:57:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:40.844 04:57:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:40.844 04:57:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:40.844 04:57:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:40.844 04:57:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:40.844 04:57:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:40.844 04:57:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:40.844 04:57:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:40.844 { 00:08:40.844 "subsystems": [ 00:08:40.844 { 
00:08:40.844 "subsystem": "bdev", 00:08:40.844 "config": [ 00:08:40.844 { 00:08:40.844 "params": { 00:08:40.844 "trtype": "pcie", 00:08:40.844 "traddr": "0000:00:10.0", 00:08:40.844 "name": "Nvme0" 00:08:40.844 }, 00:08:40.844 "method": "bdev_nvme_attach_controller" 00:08:40.844 }, 00:08:40.844 { 00:08:40.844 "method": "bdev_wait_for_examine" 00:08:40.844 } 00:08:40.844 ] 00:08:40.844 } 00:08:40.844 ] 00:08:40.844 } 00:08:40.844 [2024-07-24 04:57:55.372097] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:08:40.844 [2024-07-24 04:57:55.372260] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64074 ] 00:08:41.102 [2024-07-24 04:57:55.547572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.360 [2024-07-24 04:57:55.761666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.361 [2024-07-24 04:57:55.992018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:42.995  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:42.995 00:08:42.995 04:57:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:42.995 04:57:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:42.995 04:57:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:42.995 04:57:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:42.995 04:57:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:42.995 04:57:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:42.995 04:57:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:42.995 04:57:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- 
# set +x 00:08:43.254 04:57:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:08:43.254 04:57:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:43.254 04:57:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:43.254 04:57:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:43.254 { 00:08:43.254 "subsystems": [ 00:08:43.254 { 00:08:43.254 "subsystem": "bdev", 00:08:43.254 "config": [ 00:08:43.254 { 00:08:43.254 "params": { 00:08:43.254 "trtype": "pcie", 00:08:43.254 "traddr": "0000:00:10.0", 00:08:43.254 "name": "Nvme0" 00:08:43.254 }, 00:08:43.254 "method": "bdev_nvme_attach_controller" 00:08:43.254 }, 00:08:43.254 { 00:08:43.254 "method": "bdev_wait_for_examine" 00:08:43.254 } 00:08:43.254 ] 00:08:43.254 } 00:08:43.254 ] 00:08:43.254 } 00:08:43.254 [2024-07-24 04:57:57.826790] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:08:43.254 [2024-07-24 04:57:57.826954] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64110 ] 00:08:43.513 [2024-07-24 04:57:58.004352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.772 [2024-07-24 04:57:58.225540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.030 [2024-07-24 04:57:58.460598] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:45.409  Copying: 56/56 [kB] (average 27 MBps) 00:08:45.409 00:08:45.409 04:57:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:45.409 04:57:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:08:45.409 04:57:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:45.409 04:57:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:45.409 { 00:08:45.409 "subsystems": [ 00:08:45.409 { 00:08:45.409 "subsystem": "bdev", 00:08:45.409 "config": [ 00:08:45.409 { 00:08:45.409 "params": { 00:08:45.409 "trtype": "pcie", 00:08:45.409 "traddr": "0000:00:10.0", 00:08:45.409 "name": "Nvme0" 00:08:45.409 }, 00:08:45.409 "method": "bdev_nvme_attach_controller" 00:08:45.409 }, 00:08:45.409 { 00:08:45.409 "method": "bdev_wait_for_examine" 00:08:45.409 } 00:08:45.409 ] 00:08:45.409 } 00:08:45.409 ] 00:08:45.409 } 00:08:45.668 [2024-07-24 04:58:00.063556] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:08:45.668 [2024-07-24 04:58:00.063727] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64141 ] 00:08:45.668 [2024-07-24 04:58:00.237507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.927 [2024-07-24 04:58:00.453363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.186 [2024-07-24 04:58:00.673945] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:47.383  Copying: 56/56 [kB] (average 27 MBps) 00:08:47.383 00:08:47.383 04:58:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:47.383 04:58:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:47.383 04:58:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:47.383 04:58:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:47.383 04:58:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:47.383 04:58:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:47.383 04:58:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:47.383 04:58:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:47.383 04:58:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:47.383 04:58:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:47.383 04:58:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:47.383 { 00:08:47.383 "subsystems": [ 00:08:47.383 { 
00:08:47.383 "subsystem": "bdev", 00:08:47.383 "config": [ 00:08:47.383 { 00:08:47.383 "params": { 00:08:47.383 "trtype": "pcie", 00:08:47.383 "traddr": "0000:00:10.0", 00:08:47.383 "name": "Nvme0" 00:08:47.383 }, 00:08:47.383 "method": "bdev_nvme_attach_controller" 00:08:47.383 }, 00:08:47.383 { 00:08:47.383 "method": "bdev_wait_for_examine" 00:08:47.383 } 00:08:47.383 ] 00:08:47.383 } 00:08:47.383 ] 00:08:47.383 } 00:08:47.383 [2024-07-24 04:58:01.987712] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:08:47.383 [2024-07-24 04:58:01.987833] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64173 ] 00:08:47.646 [2024-07-24 04:58:02.145373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.921 [2024-07-24 04:58:02.365002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.180 [2024-07-24 04:58:02.600275] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:49.558  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:49.558 00:08:49.558 04:58:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:49.558 04:58:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:49.558 04:58:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:49.558 04:58:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:49.558 04:58:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:49.558 04:58:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:49.558 04:58:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:50.126 04:58:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:08:50.126 04:58:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:50.126 04:58:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:50.126 04:58:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:50.126 { 00:08:50.126 "subsystems": [ 00:08:50.126 { 00:08:50.126 "subsystem": "bdev", 00:08:50.126 "config": [ 00:08:50.126 { 00:08:50.126 "params": { 00:08:50.126 "trtype": "pcie", 00:08:50.126 "traddr": "0000:00:10.0", 00:08:50.126 "name": "Nvme0" 00:08:50.126 }, 00:08:50.126 "method": "bdev_nvme_attach_controller" 00:08:50.126 }, 00:08:50.126 { 00:08:50.126 "method": "bdev_wait_for_examine" 00:08:50.126 } 00:08:50.126 ] 00:08:50.126 } 00:08:50.126 ] 00:08:50.126 } 00:08:50.126 [2024-07-24 04:58:04.634256] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:08:50.126 [2024-07-24 04:58:04.634380] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64211 ] 00:08:50.385 [2024-07-24 04:58:04.795706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.386 [2024-07-24 04:58:05.001739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.645 [2024-07-24 04:58:05.239283] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:51.842  Copying: 56/56 [kB] (average 54 MBps) 00:08:51.842 00:08:52.102 04:58:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:08:52.102 04:58:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:52.102 04:58:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:52.102 04:58:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:52.102 { 00:08:52.102 "subsystems": [ 00:08:52.102 { 00:08:52.102 "subsystem": "bdev", 00:08:52.102 "config": [ 00:08:52.102 { 00:08:52.102 "params": { 00:08:52.102 "trtype": "pcie", 00:08:52.102 "traddr": "0000:00:10.0", 00:08:52.102 "name": "Nvme0" 00:08:52.102 }, 00:08:52.102 "method": "bdev_nvme_attach_controller" 00:08:52.102 }, 00:08:52.102 { 00:08:52.102 "method": "bdev_wait_for_examine" 00:08:52.102 } 00:08:52.102 ] 00:08:52.102 } 00:08:52.102 ] 00:08:52.102 } 00:08:52.102 [2024-07-24 04:58:06.606310] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:08:52.102 [2024-07-24 04:58:06.606470] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64242 ] 00:08:52.361 [2024-07-24 04:58:06.779554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.361 [2024-07-24 04:58:06.989929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.622 [2024-07-24 04:58:07.224656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:54.259  Copying: 56/56 [kB] (average 54 MBps) 00:08:54.259 00:08:54.259 04:58:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:54.259 04:58:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:54.259 04:58:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:54.259 04:58:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:54.259 04:58:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:54.259 04:58:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:54.259 04:58:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:54.259 04:58:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:54.259 04:58:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:54.259 04:58:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:54.259 04:58:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:54.259 { 00:08:54.259 "subsystems": [ 00:08:54.259 { 
00:08:54.259 "subsystem": "bdev", 00:08:54.259 "config": [ 00:08:54.259 { 00:08:54.259 "params": { 00:08:54.259 "trtype": "pcie", 00:08:54.259 "traddr": "0000:00:10.0", 00:08:54.259 "name": "Nvme0" 00:08:54.259 }, 00:08:54.259 "method": "bdev_nvme_attach_controller" 00:08:54.259 }, 00:08:54.259 { 00:08:54.259 "method": "bdev_wait_for_examine" 00:08:54.259 } 00:08:54.259 ] 00:08:54.259 } 00:08:54.259 ] 00:08:54.259 } 00:08:54.259 [2024-07-24 04:58:08.827106] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:08:54.259 [2024-07-24 04:58:08.827269] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64275 ] 00:08:54.518 [2024-07-24 04:58:08.998509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.777 [2024-07-24 04:58:09.213135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.036 [2024-07-24 04:58:09.442385] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:56.414  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:56.414 00:08:56.414 04:58:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:56.414 04:58:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:56.414 04:58:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:56.414 04:58:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:56.414 04:58:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:56.414 04:58:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:56.414 04:58:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:56.414 04:58:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- 
# set +x 00:08:56.674 04:58:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:08:56.674 04:58:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:56.674 04:58:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:56.674 04:58:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:56.674 { 00:08:56.674 "subsystems": [ 00:08:56.674 { 00:08:56.674 "subsystem": "bdev", 00:08:56.674 "config": [ 00:08:56.674 { 00:08:56.674 "params": { 00:08:56.674 "trtype": "pcie", 00:08:56.674 "traddr": "0000:00:10.0", 00:08:56.674 "name": "Nvme0" 00:08:56.674 }, 00:08:56.674 "method": "bdev_nvme_attach_controller" 00:08:56.674 }, 00:08:56.674 { 00:08:56.674 "method": "bdev_wait_for_examine" 00:08:56.674 } 00:08:56.674 ] 00:08:56.674 } 00:08:56.674 ] 00:08:56.674 } 00:08:56.674 [2024-07-24 04:58:11.223244] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:08:56.674 [2024-07-24 04:58:11.223411] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64306 ] 00:08:56.933 [2024-07-24 04:58:11.395352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.193 [2024-07-24 04:58:11.617675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.452 [2024-07-24 04:58:11.854339] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:58.829  Copying: 48/48 [kB] (average 46 MBps) 00:08:58.829 00:08:58.829 04:58:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:08:58.829 04:58:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:58.829 04:58:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:58.829 04:58:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:58.829 { 00:08:58.829 "subsystems": [ 00:08:58.829 { 00:08:58.829 "subsystem": "bdev", 00:08:58.829 "config": [ 00:08:58.829 { 00:08:58.829 "params": { 00:08:58.829 "trtype": "pcie", 00:08:58.829 "traddr": "0000:00:10.0", 00:08:58.829 "name": "Nvme0" 00:08:58.829 }, 00:08:58.829 "method": "bdev_nvme_attach_controller" 00:08:58.829 }, 00:08:58.829 { 00:08:58.829 "method": "bdev_wait_for_examine" 00:08:58.829 } 00:08:58.829 ] 00:08:58.829 } 00:08:58.829 ] 00:08:58.829 } 00:08:58.829 [2024-07-24 04:58:13.439714] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:08:58.829 [2024-07-24 04:58:13.439876] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64338 ] 00:08:59.087 [2024-07-24 04:58:13.610325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.346 [2024-07-24 04:58:13.826324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.605 [2024-07-24 04:58:14.055358] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:00.799  Copying: 48/48 [kB] (average 46 MBps) 00:09:00.799 00:09:00.799 04:58:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:00.799 04:58:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:09:00.799 04:58:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:00.799 04:58:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:00.799 04:58:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:09:00.799 04:58:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:09:00.799 04:58:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:09:00.799 04:58:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:00.799 04:58:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:09:00.799 04:58:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:00.799 04:58:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:00.799 { 00:09:00.799 "subsystems": [ 00:09:00.799 { 
00:09:00.799 "subsystem": "bdev", 00:09:00.799 "config": [ 00:09:00.799 { 00:09:00.799 "params": { 00:09:00.799 "trtype": "pcie", 00:09:00.799 "traddr": "0000:00:10.0", 00:09:00.799 "name": "Nvme0" 00:09:00.799 }, 00:09:00.799 "method": "bdev_nvme_attach_controller" 00:09:00.799 }, 00:09:00.799 { 00:09:00.799 "method": "bdev_wait_for_examine" 00:09:00.799 } 00:09:00.799 ] 00:09:00.799 } 00:09:00.799 ] 00:09:00.799 } 00:09:00.799 [2024-07-24 04:58:15.400772] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:09:00.799 [2024-07-24 04:58:15.400935] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64371 ] 00:09:01.058 [2024-07-24 04:58:15.584355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.317 [2024-07-24 04:58:15.801656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.576 [2024-07-24 04:58:16.039469] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:03.248  Copying: 1024/1024 [kB] (average 500 MBps) 00:09:03.248 00:09:03.248 04:58:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:09:03.248 04:58:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:09:03.248 04:58:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:09:03.248 04:58:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:09:03.248 04:58:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:09:03.248 04:58:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:09:03.248 04:58:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:03.507 04:58:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:09:03.507 04:58:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:09:03.507 04:58:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:03.507 04:58:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:03.507 { 00:09:03.507 "subsystems": [ 00:09:03.507 { 00:09:03.507 "subsystem": "bdev", 00:09:03.507 "config": [ 00:09:03.507 { 00:09:03.507 "params": { 00:09:03.507 "trtype": "pcie", 00:09:03.507 "traddr": "0000:00:10.0", 00:09:03.507 "name": "Nvme0" 00:09:03.507 }, 00:09:03.507 "method": "bdev_nvme_attach_controller" 00:09:03.507 }, 00:09:03.507 { 00:09:03.507 "method": "bdev_wait_for_examine" 00:09:03.507 } 00:09:03.507 ] 00:09:03.507 } 00:09:03.507 ] 00:09:03.507 } 00:09:03.507 [2024-07-24 04:58:18.063037] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:09:03.507 [2024-07-24 04:58:18.063196] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64408 ] 00:09:03.766 [2024-07-24 04:58:18.234080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.025 [2024-07-24 04:58:18.448295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.283 [2024-07-24 04:58:18.679388] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:05.661  Copying: 48/48 [kB] (average 46 MBps) 00:09:05.661 00:09:05.661 04:58:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:09:05.661 04:58:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:09:05.661 04:58:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:05.661 04:58:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:05.661 { 00:09:05.661 "subsystems": [ 00:09:05.661 { 00:09:05.661 "subsystem": "bdev", 00:09:05.661 "config": [ 00:09:05.661 { 00:09:05.661 "params": { 00:09:05.661 "trtype": "pcie", 00:09:05.661 "traddr": "0000:00:10.0", 00:09:05.661 "name": "Nvme0" 00:09:05.661 }, 00:09:05.661 "method": "bdev_nvme_attach_controller" 00:09:05.661 }, 00:09:05.661 { 00:09:05.661 "method": "bdev_wait_for_examine" 00:09:05.661 } 00:09:05.661 ] 00:09:05.661 } 00:09:05.661 ] 00:09:05.661 } 00:09:05.661 [2024-07-24 04:58:20.016283] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:09:05.661 [2024-07-24 04:58:20.016445] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64439 ] 00:09:05.661 [2024-07-24 04:58:20.196633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.920 [2024-07-24 04:58:20.416228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.180 [2024-07-24 04:58:20.643799] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:07.817  Copying: 48/48 [kB] (average 46 MBps) 00:09:07.817 00:09:07.817 04:58:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:07.817 04:58:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:09:07.817 04:58:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:07.817 04:58:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:07.817 04:58:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:09:07.817 04:58:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:09:07.817 04:58:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:09:07.817 04:58:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:07.817 04:58:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:09:07.817 04:58:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:07.817 04:58:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:07.817 { 00:09:07.817 "subsystems": [ 00:09:07.817 { 
00:09:07.817 "subsystem": "bdev", 00:09:07.817 "config": [ 00:09:07.817 { 00:09:07.817 "params": { 00:09:07.817 "trtype": "pcie", 00:09:07.817 "traddr": "0000:00:10.0", 00:09:07.817 "name": "Nvme0" 00:09:07.817 }, 00:09:07.817 "method": "bdev_nvme_attach_controller" 00:09:07.817 }, 00:09:07.817 { 00:09:07.817 "method": "bdev_wait_for_examine" 00:09:07.817 } 00:09:07.817 ] 00:09:07.817 } 00:09:07.817 ] 00:09:07.817 } 00:09:07.817 [2024-07-24 04:58:22.220407] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:09:07.817 [2024-07-24 04:58:22.220532] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64473 ] 00:09:07.817 [2024-07-24 04:58:22.379794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.076 [2024-07-24 04:58:22.599341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.335 [2024-07-24 04:58:22.835130] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:09.527  Copying: 1024/1024 [kB] (average 500 MBps) 00:09:09.527 00:09:09.527 00:09:09.527 real 0m40.509s 00:09:09.527 user 0m34.396s 00:09:09.527 sys 0m17.945s 00:09:09.527 04:58:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:09.527 04:58:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:09.527 ************************************ 00:09:09.527 END TEST dd_rw 00:09:09.527 ************************************ 00:09:09.527 04:58:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:09:09.527 04:58:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:09.527 04:58:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:09:09.527 04:58:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:09:09.527 ************************************ 00:09:09.527 START TEST dd_rw_offset 00:09:09.527 ************************************ 00:09:09.527 04:58:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:09:09.527 04:58:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:09:09.527 04:58:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:09:09.527 04:58:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:09:09.527 04:58:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:09:09.785 04:58:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:09:09.786 04:58:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # data=lwkf709o10och4487t9kqsapym8dukpahse31gxbcojeh2fdu4nw2m9uer1jtzqrsvns0b48r1ia3gv208wz34jvj8hk0w0ezb6179gdrdrxwlks45x2zeljfjc4j27nk6hzmlac5enfhe478hcs8pn1hbmn9b37x9o8j3ahn7oc42osd4vp4i6mpuf4h5fd4jgto4y0f4h72uqxhmrnsiog12plb6v4l32jxlctajv11nf3nnubh86ohp19rv1f842smyldx7ty326333c42qrdqx65fjexjtwge0hajbv6mwwrp63m2rnqa24jio49rp0kfypwvgs3iprnkqwf58pq5eerc5lp1yjhnlawj3eb7oyzfbay6zd8j242hkzwnfhkvc2ou6otcervmjoq5nvn9drreps853gfud0ftwt4dcefwdfqdaglyz48rlqr4fovrps5d0lsrkn7eehkphzfne3h0f08wns0qb3h8o5obupo1297ohfmmurnqqz8soz3lhn5w0ilazd29iegz3pbfb0qaqje7638e94gu2q7hmpedt8wi6bb9iv7jp1vhdklvq8aw2rytwk8e9nuk5nkyg8jtq53blwqwoi9u5pgv7kgmq0nt4d2lx2v9b3zlp1esgrjfxmzlh8qpo5k83a14cixcbd0wv8q6w2v3v0zjneddj2o3cr705m79ezmzd95pp7cuk212rcpr1qdbu34asfyu2va4tgckx3nmxiiv12hqpuih5ek1m037yjud5dmflh8epyyfu5giatbai0vtoab8arjihiellbprl6y56ceu4mxzubgwga8wcm0785d9b1tkpg4mc0ifok0o93uhebprbw813ddaipexsjipmn693qap16llbftcwcu5ztiisa5dmrrz7tgth5m4fxocj6krc0q2u1j1o9r1jdxoxjt8mguey848rnzqb5gmoycj4iytbsz5xdo9yvp9c0xzn32c7q9w18q7gmse00n012jeqlvpi711jkftxxvn
bj91gczm0xy8to7qwinn3qixirjhxtlcynw9brgu3bqmzz8yvcf212r1k5mouwfbv6azqynrl7huy1c736a85gwxp9r161518bx7yz3ssz8hhqdsfs9f5bpfbs19d0iyo7u4n719u3lfyutnbqtqcq8tajzr8im7ztz0mgl9jffzuec7lycx4qa01ej1ih05a0ivv23p4pg3yx41rbyflikbkvea0a4n4h67nkinrvlfglqn8sz101n4ldrne81bhbslewkjesiuq04ixv93l1ls2zh19j84upx0m4q57rv1a4mr7yy9fse37g4c88rsc2tg68i7irtpevwn99drxghwnhmlrb3jjzslp8zwtiwjdfxztfrdz6ajl2evk2ccjr6wkr0kz9vt3c4qt1jd5ulqccbbhs80nttoa7kmxotndbiklzruzzvz7wp6i2r11fmjsm2vap4bcrcq8cip0zxqm5zj0hgb4gmhzb06lller56ianxq7zl0msz9ffzmcn3lfn40g4hic8c8gzu3e3qit22yrj1hcpv4q1fglmmj0fmwe4si6z1oka531uoz8oxya350dcrlmicmczb0eisj408n4uqggxws56ccnvmakyuoc30s10rkpaa70qh2pmjgw6mybn5prrbynz9vzqwd5r6mjsh6xjyp5bwz219z8cx6p64vz3x8jg13qkbqfyqeawtvd6vhlzcy4tr4j4fcpqtai8c04fenocxf70kehm7lw3ec6guii5wty064mhi8lvp7jtrdppnvl0b0yt012m9hcgz9gw4hxldvn5w355vwf98r52eonqp1s0jrxttlnagh7euxizfdb95bray1cd7wbwjrc8l264l4mobuvw31uvihxjzc0zj0ju86puypd5h2hanwbxw6txgj9oucg9i1ugxs9kvaxculyg4mn2iwf28jbvr8y0t8t7r1yek0fduw1kpqw70fnufpcf2jtdhmtznv53wz9u5y2glsfzgdrm704yh2n7as9a0q7q9tw1gqhqullh180725lfriekv8xn6rq4nnkdmmu40zjcqyfzffqgxlu9vm3nrokqe7ec8gw0tu52hjx7vfy7cf66krlvfsdymlywcepnu9a821z94i8akcn1bszhlsvdv6cbc2e6993ldttlk9oxirnxan9vdv1jjuf4swbyngalxlus39likv2g424wjmj5k52xfb2hq8gwsyccwbsoh5fx1hh8mrt77ifjcay90gff4rc4t8by8z8x3n5n1iiu7s4djjxqkudso5q0rjw5drmx0axrzvbmc6e4boqf6ezsien5z9jxr94jgwou1dvy36ehwn4xmjxrrwg7tphxaq3vof733pnqsl5rr28xaorifqwkejvusy5xlawqe19s7fmbbihhwpn3iothpyk4ao4glq9zwytmf9gjujot8xlbw3w0c1b4lo2l4razenpfksn5skhxzc4oojteupw4e60upp5600ojrpz84qorj7095hjp38eoc6vd8oapzq0scflll8m7ziqw6glkfkegl37k76xgqi3yak6cknyi1lfep64rdguihzezvn4nrgr8uskc6s9x8lhghayo05rb7ze938wcp37ppw6nty65ijv6o7scx2q0ygqqnqpwcrjxb75ulhqrgxcc4t5by9uw3vw8vj8y6wf3a40pvyn7csy8wnjq3rjhmabmzq83xumiwpdu1cm34ixp4mh3h3p6lne1xt1ftewsdrmu7pr0ky9rge5ofk35xd43sm9vkifskdi8yuas22qql9pff9rp8oca7zjhtjglb62tr8gmtpyklkm495k48pb2u39pk260v1vsa1a01n8619wa8trtu7d5nekvj9f495d38bz3qyfz507w7j77m20hooy5zxc202d6bzhmdv4n30p5lt2j8grrq01rg1p4z3fv3ct819vr6bbymwh4vv49ngixqpnlz35wakoz7skoaq
ofe8wcrd6snwuh9mkue9w4de8vbjh8x601gcy5xxh3xxzl55gstxxilflk1vw44k8olompus7dhxrpcu19k75hwjfxs0l0llhdf1cbw7uizusto8r41zp0ldjuo633ksmfklr4mk7vrt9udvzfgnejjj4l5xsi95c5mxwiwpz44ynx6h0fbo57962bkm4w717he6ktm3pmt7ao14ajjeglo5e40tobq4zsk8s8oxj3emx4uxcwozfj50axnjxyxesqi0zm477s08n4zmepfmqk6705xk7luwo7us9hrpwo9wlcnztcmny6kjuflqwe6bo7y1e3lygpixmjiwui7xusxl1bxwujk37hrpknpl0kd5zmxifum72mq6y61zpvwo80hqsjyxsof47j297zmvis1no74bvus0x9oag89mxl907m5w98psnmqwxrti4oalp744wc4ytoecdk7f3bs26gxcic0i8jwmajxwb44ao5zb2skwq4bjbb3vfbqhhz7w72evbvowwwwp76yz9sh46mx9bmndktzvhv6tl7nbzm809o87em9xonzycndldg3wwwfhd1pwouux1u085trpsmizllbnxxovfwr4suaelqs4ishfur8pimpddcn85xo41kes7qwq2ecovx1lqwyltk1vx6gebfpv0mgyl9td6qscvao440gvbf0v5gi1od37c6p4c9hosqfhsxsacx2lwou4i6uk4ho0dmw39b9vztnm8lsd9lddd12se43a3tmppf08jsdhnd7f840xf00lljjogz0o9dw25aepz1608lltgpxnkchabfxuzrixxlkao5hdps1tlryhsx1xv0rf2gqkb17gwd03micq3d9l11x9jws5h3xf8ss6xbyfbdp684qfhyfda9h5a0fsibd6hqnnxzt51f0xm2f0t84zskw060p1g55fk5ge2zkc1nucywnp866714mbjm0y16ucn8j46je9j5kuvzb3fwnnmrq1lpqhv3z71kuohir1hyyc4smiiz81barpa0ysb7kwwgp2xvd01f2dffmj4eic5ik1g3n 00:09:09.786 04:58:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:09:09.786 04:58:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:09:09.786 04:58:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:09:09.786 04:58:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:09:09.786 { 00:09:09.786 "subsystems": [ 00:09:09.786 { 00:09:09.786 "subsystem": "bdev", 00:09:09.786 "config": [ 00:09:09.786 { 00:09:09.786 "params": { 00:09:09.786 "trtype": "pcie", 00:09:09.786 "traddr": "0000:00:10.0", 00:09:09.786 "name": "Nvme0" 00:09:09.786 }, 00:09:09.786 "method": "bdev_nvme_attach_controller" 00:09:09.786 }, 00:09:09.786 { 00:09:09.786 "method": "bdev_wait_for_examine" 
00:09:09.786 } 00:09:09.786 ] 00:09:09.786 } 00:09:09.786 ] 00:09:09.786 } 00:09:09.786 [2024-07-24 04:58:24.304635] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:09:09.786 [2024-07-24 04:58:24.304805] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64522 ] 00:09:10.044 [2024-07-24 04:58:24.487239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.302 [2024-07-24 04:58:24.704059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.562 [2024-07-24 04:58:24.944011] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:11.938  Copying: 4096/4096 [B] (average 4000 kBps) 00:09:11.938 00:09:11.938 04:58:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:09:11.938 04:58:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:09:11.938 04:58:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:09:11.938 04:58:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:09:11.938 { 00:09:11.938 "subsystems": [ 00:09:11.938 { 00:09:11.938 "subsystem": "bdev", 00:09:11.938 "config": [ 00:09:11.938 { 00:09:11.938 "params": { 00:09:11.938 "trtype": "pcie", 00:09:11.938 "traddr": "0000:00:10.0", 00:09:11.938 "name": "Nvme0" 00:09:11.938 }, 00:09:11.938 "method": "bdev_nvme_attach_controller" 00:09:11.938 }, 00:09:11.938 { 00:09:11.938 "method": "bdev_wait_for_examine" 00:09:11.938 } 00:09:11.938 ] 00:09:11.938 } 00:09:11.938 ] 00:09:11.938 } 00:09:11.938 [2024-07-24 04:58:26.542617] Starting SPDK v24.09-pre git 
sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:09:11.938 [2024-07-24 04:58:26.542776] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64553 ] 00:09:12.197 [2024-07-24 04:58:26.719374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.456 [2024-07-24 04:58:26.947981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.715 [2024-07-24 04:58:27.185949] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:13.911  Copying: 4096/4096 [B] (average 4000 kBps) 00:09:13.911 00:09:13.911 04:58:28 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:09:13.912 04:58:28 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ lwkf709o10och4487t9kqsapym8dukpahse31gxbcojeh2fdu4nw2m9uer1jtzqrsvns0b48r1ia3gv208wz34jvj8hk0w0ezb6179gdrdrxwlks45x2zeljfjc4j27nk6hzmlac5enfhe478hcs8pn1hbmn9b37x9o8j3ahn7oc42osd4vp4i6mpuf4h5fd4jgto4y0f4h72uqxhmrnsiog12plb6v4l32jxlctajv11nf3nnubh86ohp19rv1f842smyldx7ty326333c42qrdqx65fjexjtwge0hajbv6mwwrp63m2rnqa24jio49rp0kfypwvgs3iprnkqwf58pq5eerc5lp1yjhnlawj3eb7oyzfbay6zd8j242hkzwnfhkvc2ou6otcervmjoq5nvn9drreps853gfud0ftwt4dcefwdfqdaglyz48rlqr4fovrps5d0lsrkn7eehkphzfne3h0f08wns0qb3h8o5obupo1297ohfmmurnqqz8soz3lhn5w0ilazd29iegz3pbfb0qaqje7638e94gu2q7hmpedt8wi6bb9iv7jp1vhdklvq8aw2rytwk8e9nuk5nkyg8jtq53blwqwoi9u5pgv7kgmq0nt4d2lx2v9b3zlp1esgrjfxmzlh8qpo5k83a14cixcbd0wv8q6w2v3v0zjneddj2o3cr705m79ezmzd95pp7cuk212rcpr1qdbu34asfyu2va4tgckx3nmxiiv12hqpuih5ek1m037yjud5dmflh8epyyfu5giatbai0vtoab8arjihiellbprl6y56ceu4mxzubgwga8wcm0785d9b1tkpg4mc0ifok0o93uhebprbw813ddaipexsjipmn693qap16llbftcwcu5ztiisa5dmrrz7tgth5m4fxocj6krc0q2u1j1o9r1jdxoxjt8mguey848rnzqb5gmoycj4iytbsz5xdo9yvp9c0xzn32c7q9w18q7gmse00n012jeqlvpi711jkftxxvnbj91gczm
0xy8to7qwinn3qixirjhxtlcynw9brgu3bqmzz8yvcf212r1k5mouwfbv6azqynrl7huy1c736a85gwxp9r161518bx7yz3ssz8hhqdsfs9f5bpfbs19d0iyo7u4n719u3lfyutnbqtqcq8tajzr8im7ztz0mgl9jffzuec7lycx4qa01ej1ih05a0ivv23p4pg3yx41rbyflikbkvea0a4n4h67nkinrvlfglqn8sz101n4ldrne81bhbslewkjesiuq04ixv93l1ls2zh19j84upx0m4q57rv1a4mr7yy9fse37g4c88rsc2tg68i7irtpevwn99drxghwnhmlrb3jjzslp8zwtiwjdfxztfrdz6ajl2evk2ccjr6wkr0kz9vt3c4qt1jd5ulqccbbhs80nttoa7kmxotndbiklzruzzvz7wp6i2r11fmjsm2vap4bcrcq8cip0zxqm5zj0hgb4gmhzb06lller56ianxq7zl0msz9ffzmcn3lfn40g4hic8c8gzu3e3qit22yrj1hcpv4q1fglmmj0fmwe4si6z1oka531uoz8oxya350dcrlmicmczb0eisj408n4uqggxws56ccnvmakyuoc30s10rkpaa70qh2pmjgw6mybn5prrbynz9vzqwd5r6mjsh6xjyp5bwz219z8cx6p64vz3x8jg13qkbqfyqeawtvd6vhlzcy4tr4j4fcpqtai8c04fenocxf70kehm7lw3ec6guii5wty064mhi8lvp7jtrdppnvl0b0yt012m9hcgz9gw4hxldvn5w355vwf98r52eonqp1s0jrxttlnagh7euxizfdb95bray1cd7wbwjrc8l264l4mobuvw31uvihxjzc0zj0ju86puypd5h2hanwbxw6txgj9oucg9i1ugxs9kvaxculyg4mn2iwf28jbvr8y0t8t7r1yek0fduw1kpqw70fnufpcf2jtdhmtznv53wz9u5y2glsfzgdrm704yh2n7as9a0q7q9tw1gqhqullh180725lfriekv8xn6rq4nnkdmmu40zjcqyfzffqgxlu9vm3nrokqe7ec8gw0tu52hjx7vfy7cf66krlvfsdymlywcepnu9a821z94i8akcn1bszhlsvdv6cbc2e6993ldttlk9oxirnxan9vdv1jjuf4swbyngalxlus39likv2g424wjmj5k52xfb2hq8gwsyccwbsoh5fx1hh8mrt77ifjcay90gff4rc4t8by8z8x3n5n1iiu7s4djjxqkudso5q0rjw5drmx0axrzvbmc6e4boqf6ezsien5z9jxr94jgwou1dvy36ehwn4xmjxrrwg7tphxaq3vof733pnqsl5rr28xaorifqwkejvusy5xlawqe19s7fmbbihhwpn3iothpyk4ao4glq9zwytmf9gjujot8xlbw3w0c1b4lo2l4razenpfksn5skhxzc4oojteupw4e60upp5600ojrpz84qorj7095hjp38eoc6vd8oapzq0scflll8m7ziqw6glkfkegl37k76xgqi3yak6cknyi1lfep64rdguihzezvn4nrgr8uskc6s9x8lhghayo05rb7ze938wcp37ppw6nty65ijv6o7scx2q0ygqqnqpwcrjxb75ulhqrgxcc4t5by9uw3vw8vj8y6wf3a40pvyn7csy8wnjq3rjhmabmzq83xumiwpdu1cm34ixp4mh3h3p6lne1xt1ftewsdrmu7pr0ky9rge5ofk35xd43sm9vkifskdi8yuas22qql9pff9rp8oca7zjhtjglb62tr8gmtpyklkm495k48pb2u39pk260v1vsa1a01n8619wa8trtu7d5nekvj9f495d38bz3qyfz507w7j77m20hooy5zxc202d6bzhmdv4n30p5lt2j8grrq01rg1p4z3fv3ct819vr6bbymwh4vv49ngixqpnlz35wakoz7skoaqofe8wcrd
6snwuh9mkue9w4de8vbjh8x601gcy5xxh3xxzl55gstxxilflk1vw44k8olompus7dhxrpcu19k75hwjfxs0l0llhdf1cbw7uizusto8r41zp0ldjuo633ksmfklr4mk7vrt9udvzfgnejjj4l5xsi95c5mxwiwpz44ynx6h0fbo57962bkm4w717he6ktm3pmt7ao14ajjeglo5e40tobq4zsk8s8oxj3emx4uxcwozfj50axnjxyxesqi0zm477s08n4zmepfmqk6705xk7luwo7us9hrpwo9wlcnztcmny6kjuflqwe6bo7y1e3lygpixmjiwui7xusxl1bxwujk37hrpknpl0kd5zmxifum72mq6y61zpvwo80hqsjyxsof47j297zmvis1no74bvus0x9oag89mxl907m5w98psnmqwxrti4oalp744wc4ytoecdk7f3bs26gxcic0i8jwmajxwb44ao5zb2skwq4bjbb3vfbqhhz7w72evbvowwwwp76yz9sh46mx9bmndktzvhv6tl7nbzm809o87em9xonzycndldg3wwwfhd1pwouux1u085trpsmizllbnxxovfwr4suaelqs4ishfur8pimpddcn85xo41kes7qwq2ecovx1lqwyltk1vx6gebfpv0mgyl9td6qscvao440gvbf0v5gi1od37c6p4c9hosqfhsxsacx2lwou4i6uk4ho0dmw39b9vztnm8lsd9lddd12se43a3tmppf08jsdhnd7f840xf00lljjogz0o9dw25aepz1608lltgpxnkchabfxuzrixxlkao5hdps1tlryhsx1xv0rf2gqkb17gwd03micq3d9l11x9jws5h3xf8ss6xbyfbdp684qfhyfda9h5a0fsibd6hqnnxzt51f0xm2f0t84zskw060p1g55fk5ge2zkc1nucywnp866714mbjm0y16ucn8j46je9j5kuvzb3fwnnmrq1lpqhv3z71kuohir1hyyc4smiiz81barpa0ysb7kwwgp2xvd01f2dffmj4eic5ik1g3n == 
\l\w\k\f\7\0\9\o\1\0\o\c\h\4\4\8\7\t\9\k\q\s\a\p\y\m\8\d\u\k\p\a\h\s\e\3\1\g\x\b\c\o\j\e\h\2\f\d\u\4\n\w\2\m\9\u\e\r\1\j\t\z\q\r\s\v\n\s\0\b\4\8\r\1\i\a\3\g\v\2\0\8\w\z\3\4\j\v\j\8\h\k\0\w\0\e\z\b\6\1\7\9\g\d\r\d\r\x\w\l\k\s\4\5\x\2\z\e\l\j\f\j\c\4\j\2\7\n\k\6\h\z\m\l\a\c\5\e\n\f\h\e\4\7\8\h\c\s\8\p\n\1\h\b\m\n\9\b\3\7\x\9\o\8\j\3\a\h\n\7\o\c\4\2\o\s\d\4\v\p\4\i\6\m\p\u\f\4\h\5\f\d\4\j\g\t\o\4\y\0\f\4\h\7\2\u\q\x\h\m\r\n\s\i\o\g\1\2\p\l\b\6\v\4\l\3\2\j\x\l\c\t\a\j\v\1\1\n\f\3\n\n\u\b\h\8\6\o\h\p\1\9\r\v\1\f\8\4\2\s\m\y\l\d\x\7\t\y\3\2\6\3\3\3\c\4\2\q\r\d\q\x\6\5\f\j\e\x\j\t\w\g\e\0\h\a\j\b\v\6\m\w\w\r\p\6\3\m\2\r\n\q\a\2\4\j\i\o\4\9\r\p\0\k\f\y\p\w\v\g\s\3\i\p\r\n\k\q\w\f\5\8\p\q\5\e\e\r\c\5\l\p\1\y\j\h\n\l\a\w\j\3\e\b\7\o\y\z\f\b\a\y\6\z\d\8\j\2\4\2\h\k\z\w\n\f\h\k\v\c\2\o\u\6\o\t\c\e\r\v\m\j\o\q\5\n\v\n\9\d\r\r\e\p\s\8\5\3\g\f\u\d\0\f\t\w\t\4\d\c\e\f\w\d\f\q\d\a\g\l\y\z\4\8\r\l\q\r\4\f\o\v\r\p\s\5\d\0\l\s\r\k\n\7\e\e\h\k\p\h\z\f\n\e\3\h\0\f\0\8\w\n\s\0\q\b\3\h\8\o\5\o\b\u\p\o\1\2\9\7\o\h\f\m\m\u\r\n\q\q\z\8\s\o\z\3\l\h\n\5\w\0\i\l\a\z\d\2\9\i\e\g\z\3\p\b\f\b\0\q\a\q\j\e\7\6\3\8\e\9\4\g\u\2\q\7\h\m\p\e\d\t\8\w\i\6\b\b\9\i\v\7\j\p\1\v\h\d\k\l\v\q\8\a\w\2\r\y\t\w\k\8\e\9\n\u\k\5\n\k\y\g\8\j\t\q\5\3\b\l\w\q\w\o\i\9\u\5\p\g\v\7\k\g\m\q\0\n\t\4\d\2\l\x\2\v\9\b\3\z\l\p\1\e\s\g\r\j\f\x\m\z\l\h\8\q\p\o\5\k\8\3\a\1\4\c\i\x\c\b\d\0\w\v\8\q\6\w\2\v\3\v\0\z\j\n\e\d\d\j\2\o\3\c\r\7\0\5\m\7\9\e\z\m\z\d\9\5\p\p\7\c\u\k\2\1\2\r\c\p\r\1\q\d\b\u\3\4\a\s\f\y\u\2\v\a\4\t\g\c\k\x\3\n\m\x\i\i\v\1\2\h\q\p\u\i\h\5\e\k\1\m\0\3\7\y\j\u\d\5\d\m\f\l\h\8\e\p\y\y\f\u\5\g\i\a\t\b\a\i\0\v\t\o\a\b\8\a\r\j\i\h\i\e\l\l\b\p\r\l\6\y\5\6\c\e\u\4\m\x\z\u\b\g\w\g\a\8\w\c\m\0\7\8\5\d\9\b\1\t\k\p\g\4\m\c\0\i\f\o\k\0\o\9\3\u\h\e\b\p\r\b\w\8\1\3\d\d\a\i\p\e\x\s\j\i\p\m\n\6\9\3\q\a\p\1\6\l\l\b\f\t\c\w\c\u\5\z\t\i\i\s\a\5\d\m\r\r\z\7\t\g\t\h\5\m\4\f\x\o\c\j\6\k\r\c\0\q\2\u\1\j\1\o\9\r\1\j\d\x\o\x\j\t\8\m\g\u\e\y\8\4\8\r\n\z\q\b\5\g\m\o\y\c\j\4\i\y\t\b\s\z\5\x\d\o\9\y\v\p\9\c\0\x\z\n\3\2\c\7\q\9\w\1\8\q\7\g\m\s
\e\0\0\n\0\1\2\j\e\q\l\v\p\i\7\1\1\j\k\f\t\x\x\v\n\b\j\9\1\g\c\z\m\0\x\y\8\t\o\7\q\w\i\n\n\3\q\i\x\i\r\j\h\x\t\l\c\y\n\w\9\b\r\g\u\3\b\q\m\z\z\8\y\v\c\f\2\1\2\r\1\k\5\m\o\u\w\f\b\v\6\a\z\q\y\n\r\l\7\h\u\y\1\c\7\3\6\a\8\5\g\w\x\p\9\r\1\6\1\5\1\8\b\x\7\y\z\3\s\s\z\8\h\h\q\d\s\f\s\9\f\5\b\p\f\b\s\1\9\d\0\i\y\o\7\u\4\n\7\1\9\u\3\l\f\y\u\t\n\b\q\t\q\c\q\8\t\a\j\z\r\8\i\m\7\z\t\z\0\m\g\l\9\j\f\f\z\u\e\c\7\l\y\c\x\4\q\a\0\1\e\j\1\i\h\0\5\a\0\i\v\v\2\3\p\4\p\g\3\y\x\4\1\r\b\y\f\l\i\k\b\k\v\e\a\0\a\4\n\4\h\6\7\n\k\i\n\r\v\l\f\g\l\q\n\8\s\z\1\0\1\n\4\l\d\r\n\e\8\1\b\h\b\s\l\e\w\k\j\e\s\i\u\q\0\4\i\x\v\9\3\l\1\l\s\2\z\h\1\9\j\8\4\u\p\x\0\m\4\q\5\7\r\v\1\a\4\m\r\7\y\y\9\f\s\e\3\7\g\4\c\8\8\r\s\c\2\t\g\6\8\i\7\i\r\t\p\e\v\w\n\9\9\d\r\x\g\h\w\n\h\m\l\r\b\3\j\j\z\s\l\p\8\z\w\t\i\w\j\d\f\x\z\t\f\r\d\z\6\a\j\l\2\e\v\k\2\c\c\j\r\6\w\k\r\0\k\z\9\v\t\3\c\4\q\t\1\j\d\5\u\l\q\c\c\b\b\h\s\8\0\n\t\t\o\a\7\k\m\x\o\t\n\d\b\i\k\l\z\r\u\z\z\v\z\7\w\p\6\i\2\r\1\1\f\m\j\s\m\2\v\a\p\4\b\c\r\c\q\8\c\i\p\0\z\x\q\m\5\z\j\0\h\g\b\4\g\m\h\z\b\0\6\l\l\l\e\r\5\6\i\a\n\x\q\7\z\l\0\m\s\z\9\f\f\z\m\c\n\3\l\f\n\4\0\g\4\h\i\c\8\c\8\g\z\u\3\e\3\q\i\t\2\2\y\r\j\1\h\c\p\v\4\q\1\f\g\l\m\m\j\0\f\m\w\e\4\s\i\6\z\1\o\k\a\5\3\1\u\o\z\8\o\x\y\a\3\5\0\d\c\r\l\m\i\c\m\c\z\b\0\e\i\s\j\4\0\8\n\4\u\q\g\g\x\w\s\5\6\c\c\n\v\m\a\k\y\u\o\c\3\0\s\1\0\r\k\p\a\a\7\0\q\h\2\p\m\j\g\w\6\m\y\b\n\5\p\r\r\b\y\n\z\9\v\z\q\w\d\5\r\6\m\j\s\h\6\x\j\y\p\5\b\w\z\2\1\9\z\8\c\x\6\p\6\4\v\z\3\x\8\j\g\1\3\q\k\b\q\f\y\q\e\a\w\t\v\d\6\v\h\l\z\c\y\4\t\r\4\j\4\f\c\p\q\t\a\i\8\c\0\4\f\e\n\o\c\x\f\7\0\k\e\h\m\7\l\w\3\e\c\6\g\u\i\i\5\w\t\y\0\6\4\m\h\i\8\l\v\p\7\j\t\r\d\p\p\n\v\l\0\b\0\y\t\0\1\2\m\9\h\c\g\z\9\g\w\4\h\x\l\d\v\n\5\w\3\5\5\v\w\f\9\8\r\5\2\e\o\n\q\p\1\s\0\j\r\x\t\t\l\n\a\g\h\7\e\u\x\i\z\f\d\b\9\5\b\r\a\y\1\c\d\7\w\b\w\j\r\c\8\l\2\6\4\l\4\m\o\b\u\v\w\3\1\u\v\i\h\x\j\z\c\0\z\j\0\j\u\8\6\p\u\y\p\d\5\h\2\h\a\n\w\b\x\w\6\t\x\g\j\9\o\u\c\g\9\i\1\u\g\x\s\9\k\v\a\x\c\u\l\y\g\4\m\n\2\i\w\f\2\8\j\b\v\r\8\y\0\t\8\t\7\r\1\y\e\k\0\f\d\u\w\1\k\p\q\w\7\0
\f\n\u\f\p\c\f\2\j\t\d\h\m\t\z\n\v\5\3\w\z\9\u\5\y\2\g\l\s\f\z\g\d\r\m\7\0\4\y\h\2\n\7\a\s\9\a\0\q\7\q\9\t\w\1\g\q\h\q\u\l\l\h\1\8\0\7\2\5\l\f\r\i\e\k\v\8\x\n\6\r\q\4\n\n\k\d\m\m\u\4\0\z\j\c\q\y\f\z\f\f\q\g\x\l\u\9\v\m\3\n\r\o\k\q\e\7\e\c\8\g\w\0\t\u\5\2\h\j\x\7\v\f\y\7\c\f\6\6\k\r\l\v\f\s\d\y\m\l\y\w\c\e\p\n\u\9\a\8\2\1\z\9\4\i\8\a\k\c\n\1\b\s\z\h\l\s\v\d\v\6\c\b\c\2\e\6\9\9\3\l\d\t\t\l\k\9\o\x\i\r\n\x\a\n\9\v\d\v\1\j\j\u\f\4\s\w\b\y\n\g\a\l\x\l\u\s\3\9\l\i\k\v\2\g\4\2\4\w\j\m\j\5\k\5\2\x\f\b\2\h\q\8\g\w\s\y\c\c\w\b\s\o\h\5\f\x\1\h\h\8\m\r\t\7\7\i\f\j\c\a\y\9\0\g\f\f\4\r\c\4\t\8\b\y\8\z\8\x\3\n\5\n\1\i\i\u\7\s\4\d\j\j\x\q\k\u\d\s\o\5\q\0\r\j\w\5\d\r\m\x\0\a\x\r\z\v\b\m\c\6\e\4\b\o\q\f\6\e\z\s\i\e\n\5\z\9\j\x\r\9\4\j\g\w\o\u\1\d\v\y\3\6\e\h\w\n\4\x\m\j\x\r\r\w\g\7\t\p\h\x\a\q\3\v\o\f\7\3\3\p\n\q\s\l\5\r\r\2\8\x\a\o\r\i\f\q\w\k\e\j\v\u\s\y\5\x\l\a\w\q\e\1\9\s\7\f\m\b\b\i\h\h\w\p\n\3\i\o\t\h\p\y\k\4\a\o\4\g\l\q\9\z\w\y\t\m\f\9\g\j\u\j\o\t\8\x\l\b\w\3\w\0\c\1\b\4\l\o\2\l\4\r\a\z\e\n\p\f\k\s\n\5\s\k\h\x\z\c\4\o\o\j\t\e\u\p\w\4\e\6\0\u\p\p\5\6\0\0\o\j\r\p\z\8\4\q\o\r\j\7\0\9\5\h\j\p\3\8\e\o\c\6\v\d\8\o\a\p\z\q\0\s\c\f\l\l\l\8\m\7\z\i\q\w\6\g\l\k\f\k\e\g\l\3\7\k\7\6\x\g\q\i\3\y\a\k\6\c\k\n\y\i\1\l\f\e\p\6\4\r\d\g\u\i\h\z\e\z\v\n\4\n\r\g\r\8\u\s\k\c\6\s\9\x\8\l\h\g\h\a\y\o\0\5\r\b\7\z\e\9\3\8\w\c\p\3\7\p\p\w\6\n\t\y\6\5\i\j\v\6\o\7\s\c\x\2\q\0\y\g\q\q\n\q\p\w\c\r\j\x\b\7\5\u\l\h\q\r\g\x\c\c\4\t\5\b\y\9\u\w\3\v\w\8\v\j\8\y\6\w\f\3\a\4\0\p\v\y\n\7\c\s\y\8\w\n\j\q\3\r\j\h\m\a\b\m\z\q\8\3\x\u\m\i\w\p\d\u\1\c\m\3\4\i\x\p\4\m\h\3\h\3\p\6\l\n\e\1\x\t\1\f\t\e\w\s\d\r\m\u\7\p\r\0\k\y\9\r\g\e\5\o\f\k\3\5\x\d\4\3\s\m\9\v\k\i\f\s\k\d\i\8\y\u\a\s\2\2\q\q\l\9\p\f\f\9\r\p\8\o\c\a\7\z\j\h\t\j\g\l\b\6\2\t\r\8\g\m\t\p\y\k\l\k\m\4\9\5\k\4\8\p\b\2\u\3\9\p\k\2\6\0\v\1\v\s\a\1\a\0\1\n\8\6\1\9\w\a\8\t\r\t\u\7\d\5\n\e\k\v\j\9\f\4\9\5\d\3\8\b\z\3\q\y\f\z\5\0\7\w\7\j\7\7\m\2\0\h\o\o\y\5\z\x\c\2\0\2\d\6\b\z\h\m\d\v\4\n\3\0\p\5\l\t\2\j\8\g\r\r\q\0\1\r\g\1\p\4\z\3\f\v\3\c\t\8\1\9\v\r\6\b\b\y\m\w\h\4\v
\v\4\9\n\g\i\x\q\p\n\l\z\3\5\w\a\k\o\z\7\s\k\o\a\q\o\f\e\8\w\c\r\d\6\s\n\w\u\h\9\m\k\u\e\9\w\4\d\e\8\v\b\j\h\8\x\6\0\1\g\c\y\5\x\x\h\3\x\x\z\l\5\5\g\s\t\x\x\i\l\f\l\k\1\v\w\4\4\k\8\o\l\o\m\p\u\s\7\d\h\x\r\p\c\u\1\9\k\7\5\h\w\j\f\x\s\0\l\0\l\l\h\d\f\1\c\b\w\7\u\i\z\u\s\t\o\8\r\4\1\z\p\0\l\d\j\u\o\6\3\3\k\s\m\f\k\l\r\4\m\k\7\v\r\t\9\u\d\v\z\f\g\n\e\j\j\j\4\l\5\x\s\i\9\5\c\5\m\x\w\i\w\p\z\4\4\y\n\x\6\h\0\f\b\o\5\7\9\6\2\b\k\m\4\w\7\1\7\h\e\6\k\t\m\3\p\m\t\7\a\o\1\4\a\j\j\e\g\l\o\5\e\4\0\t\o\b\q\4\z\s\k\8\s\8\o\x\j\3\e\m\x\4\u\x\c\w\o\z\f\j\5\0\a\x\n\j\x\y\x\e\s\q\i\0\z\m\4\7\7\s\0\8\n\4\z\m\e\p\f\m\q\k\6\7\0\5\x\k\7\l\u\w\o\7\u\s\9\h\r\p\w\o\9\w\l\c\n\z\t\c\m\n\y\6\k\j\u\f\l\q\w\e\6\b\o\7\y\1\e\3\l\y\g\p\i\x\m\j\i\w\u\i\7\x\u\s\x\l\1\b\x\w\u\j\k\3\7\h\r\p\k\n\p\l\0\k\d\5\z\m\x\i\f\u\m\7\2\m\q\6\y\6\1\z\p\v\w\o\8\0\h\q\s\j\y\x\s\o\f\4\7\j\2\9\7\z\m\v\i\s\1\n\o\7\4\b\v\u\s\0\x\9\o\a\g\8\9\m\x\l\9\0\7\m\5\w\9\8\p\s\n\m\q\w\x\r\t\i\4\o\a\l\p\7\4\4\w\c\4\y\t\o\e\c\d\k\7\f\3\b\s\2\6\g\x\c\i\c\0\i\8\j\w\m\a\j\x\w\b\4\4\a\o\5\z\b\2\s\k\w\q\4\b\j\b\b\3\v\f\b\q\h\h\z\7\w\7\2\e\v\b\v\o\w\w\w\w\p\7\6\y\z\9\s\h\4\6\m\x\9\b\m\n\d\k\t\z\v\h\v\6\t\l\7\n\b\z\m\8\0\9\o\8\7\e\m\9\x\o\n\z\y\c\n\d\l\d\g\3\w\w\w\f\h\d\1\p\w\o\u\u\x\1\u\0\8\5\t\r\p\s\m\i\z\l\l\b\n\x\x\o\v\f\w\r\4\s\u\a\e\l\q\s\4\i\s\h\f\u\r\8\p\i\m\p\d\d\c\n\8\5\x\o\4\1\k\e\s\7\q\w\q\2\e\c\o\v\x\1\l\q\w\y\l\t\k\1\v\x\6\g\e\b\f\p\v\0\m\g\y\l\9\t\d\6\q\s\c\v\a\o\4\4\0\g\v\b\f\0\v\5\g\i\1\o\d\3\7\c\6\p\4\c\9\h\o\s\q\f\h\s\x\s\a\c\x\2\l\w\o\u\4\i\6\u\k\4\h\o\0\d\m\w\3\9\b\9\v\z\t\n\m\8\l\s\d\9\l\d\d\d\1\2\s\e\4\3\a\3\t\m\p\p\f\0\8\j\s\d\h\n\d\7\f\8\4\0\x\f\0\0\l\l\j\j\o\g\z\0\o\9\d\w\2\5\a\e\p\z\1\6\0\8\l\l\t\g\p\x\n\k\c\h\a\b\f\x\u\z\r\i\x\x\l\k\a\o\5\h\d\p\s\1\t\l\r\y\h\s\x\1\x\v\0\r\f\2\g\q\k\b\1\7\g\w\d\0\3\m\i\c\q\3\d\9\l\1\1\x\9\j\w\s\5\h\3\x\f\8\s\s\6\x\b\y\f\b\d\p\6\8\4\q\f\h\y\f\d\a\9\h\5\a\0\f\s\i\b\d\6\h\q\n\n\x\z\t\5\1\f\0\x\m\2\f\0\t\8\4\z\s\k\w\0\6\0\p\1\g\5\5\f\k\5\g\e\2\z\k\c\1\n\u\c\y\w\n\p\8\6\6\7\1\4\m\b\j\m\0
\y\1\6\u\c\n\8\j\4\6\j\e\9\j\5\k\u\v\z\b\3\f\w\n\n\m\r\q\1\l\p\q\h\v\3\z\7\1\k\u\o\h\i\r\1\h\y\y\c\4\s\m\i\i\z\8\1\b\a\r\p\a\0\y\s\b\7\k\w\w\g\p\2\x\v\d\0\1\f\2\d\f\f\m\j\4\e\i\c\5\i\k\1\g\3\n ]] 00:09:13.912 00:09:13.912 real 0m4.276s 00:09:13.912 user 0m3.624s 00:09:13.912 sys 0m1.973s 00:09:13.912 04:58:28 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:13.912 04:58:28 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:09:13.912 ************************************ 00:09:13.912 END TEST dd_rw_offset 00:09:13.912 ************************************ 00:09:13.912 04:58:28 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:09:13.912 04:58:28 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:09:13.912 04:58:28 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:13.912 04:58:28 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:13.912 04:58:28 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:09:13.912 04:58:28 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:09:13.912 04:58:28 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:09:13.912 04:58:28 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:13.912 04:58:28 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:09:13.912 04:58:28 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:13.912 04:58:28 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:09:13.912 { 00:09:13.912 "subsystems": [ 00:09:13.912 { 00:09:13.912 "subsystem": "bdev", 00:09:13.912 "config": [ 00:09:13.912 { 00:09:13.912 "params": { 00:09:13.912 "trtype": "pcie", 00:09:13.912 "traddr": "0000:00:10.0", 00:09:13.912 "name": "Nvme0" 00:09:13.912 }, 00:09:13.912 "method": 
"bdev_nvme_attach_controller" 00:09:13.912 }, 00:09:13.912 { 00:09:13.912 "method": "bdev_wait_for_examine" 00:09:13.912 } 00:09:13.912 ] 00:09:13.912 } 00:09:13.912 ] 00:09:13.912 } 00:09:14.171 [2024-07-24 04:58:28.552612] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:09:14.171 [2024-07-24 04:58:28.552736] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64600 ] 00:09:14.171 [2024-07-24 04:58:28.712296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.430 [2024-07-24 04:58:28.927124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.689 [2024-07-24 04:58:29.162001] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:16.367  Copying: 1024/1024 [kB] (average 500 MBps) 00:09:16.367 00:09:16.367 04:58:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:16.367 00:09:16.367 real 0m49.691s 00:09:16.367 user 0m41.909s 00:09:16.367 sys 0m21.596s 00:09:16.367 04:58:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:16.367 ************************************ 00:09:16.367 END TEST spdk_dd_basic_rw 00:09:16.367 ************************************ 00:09:16.367 04:58:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:09:16.367 04:58:30 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:09:16.367 04:58:30 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:16.367 04:58:30 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:16.367 04:58:30 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:16.367 
************************************ 00:09:16.367 START TEST spdk_dd_posix 00:09:16.367 ************************************ 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:09:16.367 * Looking for test storage... 00:09:16.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 
-- # tests 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:09:16.367 * First test run, liburing in use 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:16.367 ************************************ 00:09:16.367 START TEST dd_flag_append 00:09:16.367 ************************************ 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=qwv8lt0yfdwo5eetiu494jlgujh883l1 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=6ja210m7ip63t4v5woyz6xzrkkvtyrai 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s qwv8lt0yfdwo5eetiu494jlgujh883l1 00:09:16.367 04:58:30 
spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 6ja210m7ip63t4v5woyz6xzrkkvtyrai 00:09:16.367 04:58:30 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:09:16.367 [2024-07-24 04:58:30.956033] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:09:16.367 [2024-07-24 04:58:30.956199] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64676 ] 00:09:16.625 [2024-07-24 04:58:31.135932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.883 [2024-07-24 04:58:31.355336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.141 [2024-07-24 04:58:31.586594] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:18.518  Copying: 32/32 [B] (average 31 kBps) 00:09:18.518 00:09:18.518 04:58:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 6ja210m7ip63t4v5woyz6xzrkkvtyraiqwv8lt0yfdwo5eetiu494jlgujh883l1 == \6\j\a\2\1\0\m\7\i\p\6\3\t\4\v\5\w\o\y\z\6\x\z\r\k\k\v\t\y\r\a\i\q\w\v\8\l\t\0\y\f\d\w\o\5\e\e\t\i\u\4\9\4\j\l\g\u\j\h\8\8\3\l\1 ]] 00:09:18.518 00:09:18.518 real 0m2.149s 00:09:18.518 user 0m1.796s 00:09:18.518 sys 0m1.076s 00:09:18.518 04:58:32 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:18.518 04:58:32 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:09:18.518 ************************************ 00:09:18.518 END TEST dd_flag_append 00:09:18.518 ************************************ 00:09:18.518 04:58:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test 
dd_flag_directory directory 00:09:18.518 04:58:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:18.518 04:58:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:18.518 04:58:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:18.518 ************************************ 00:09:18.518 START TEST dd_flag_directory 00:09:18.518 ************************************ 00:09:18.518 04:58:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:09:18.518 04:58:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:18.518 04:58:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:09:18.518 04:58:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:18.518 04:58:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:18.519 04:58:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:18.519 04:58:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:18.519 04:58:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:18.519 04:58:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:18.519 04:58:33 spdk_dd.spdk_dd_posix.dd_flag_directory 
-- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:18.519 04:58:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:18.519 04:58:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:18.519 04:58:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:18.777 [2024-07-24 04:58:33.158709] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:09:18.777 [2024-07-24 04:58:33.158898] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64728 ] 00:09:18.777 [2024-07-24 04:58:33.340418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.034 [2024-07-24 04:58:33.561713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.291 [2024-07-24 04:58:33.787987] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:19.291 [2024-07-24 04:58:33.901684] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:19.291 [2024-07-24 04:58:33.901739] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:19.291 [2024-07-24 04:58:33.901786] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:20.226 [2024-07-24 04:58:34.714594] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:20.794 04:58:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@651 -- # es=236 00:09:20.794 04:58:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:20.795 04:58:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:09:20.795 04:58:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:09:20.795 04:58:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:09:20.795 04:58:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:20.795 04:58:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:20.795 04:58:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:09:20.795 04:58:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:20.795 04:58:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.795 04:58:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.795 04:58:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.795 04:58:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.795 04:58:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.795 04:58:35 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.795 04:58:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.795 04:58:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:20.795 04:58:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:20.795 [2024-07-24 04:58:35.283262] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:09:20.795 [2024-07-24 04:58:35.283430] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64755 ] 00:09:21.054 [2024-07-24 04:58:35.464065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.054 [2024-07-24 04:58:35.684470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.313 [2024-07-24 04:58:35.915918] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:21.572 [2024-07-24 04:58:36.031757] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:21.572 [2024-07-24 04:58:36.031809] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:21.572 [2024-07-24 04:58:36.031858] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:22.510 [2024-07-24 04:58:36.848216] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:22.769 04:58:37 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:09:22.769 04:58:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:22.769 04:58:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:09:22.769 04:58:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:09:22.769 04:58:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:09:22.769 04:58:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:22.769 00:09:22.769 real 0m4.249s 00:09:22.769 user 0m3.534s 00:09:22.769 sys 0m0.494s 00:09:22.769 04:58:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:22.769 ************************************ 00:09:22.769 END TEST dd_flag_directory 00:09:22.769 04:58:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:09:22.769 ************************************ 00:09:22.769 04:58:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:09:22.769 04:58:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:22.769 04:58:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:22.769 04:58:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:22.769 ************************************ 00:09:22.769 START TEST dd_flag_nofollow 00:09:22.769 ************************************ 00:09:22.769 04:58:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:09:22.769 04:58:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:22.769 04:58:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local 
test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:22.769 04:58:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:22.769 04:58:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:22.769 04:58:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:22.769 04:58:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:09:22.769 04:58:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:22.769 04:58:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.769 04:58:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:22.769 04:58:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.769 04:58:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:22.769 04:58:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.769 04:58:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:22.769 04:58:37 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.769 04:58:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:22.769 04:58:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:23.029 [2024-07-24 04:58:37.446171] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:09:23.029 [2024-07-24 04:58:37.446295] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64801 ] 00:09:23.029 [2024-07-24 04:58:37.604526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.288 [2024-07-24 04:58:37.825948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.546 [2024-07-24 04:58:38.052273] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:23.546 [2024-07-24 04:58:38.168347] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:09:23.546 [2024-07-24 04:58:38.168416] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:09:23.546 [2024-07-24 04:58:38.168450] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:24.481 [2024-07-24 04:58:38.991913] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:25.048 04:58:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 
00:09:25.048 04:58:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:25.048 04:58:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:09:25.048 04:58:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:09:25.048 04:58:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:09:25.048 04:58:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:25.048 04:58:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:25.048 04:58:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:09:25.048 04:58:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:25.048 04:58:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.048 04:58:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:25.048 04:58:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.048 04:58:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:25.048 04:58:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.048 04:58:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:25.048 04:58:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.048 04:58:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:25.048 04:58:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:25.048 [2024-07-24 04:58:39.555424] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:09:25.048 [2024-07-24 04:58:39.555604] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64828 ] 00:09:25.307 [2024-07-24 04:58:39.738861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.565 [2024-07-24 04:58:39.958391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.565 [2024-07-24 04:58:40.188870] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:25.823 [2024-07-24 04:58:40.308183] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:09:25.823 [2024-07-24 04:58:40.308243] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:09:25.823 [2024-07-24 04:58:40.308276] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:26.830 [2024-07-24 04:58:41.131257] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:27.088 04:58:41 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:09:27.088 04:58:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:27.088 04:58:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:09:27.088 04:58:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:09:27.088 04:58:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:09:27.088 04:58:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:27.088 04:58:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:09:27.088 04:58:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:09:27.088 04:58:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:09:27.088 04:58:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:27.088 [2024-07-24 04:58:41.702005] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:09:27.088 [2024-07-24 04:58:41.702170] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64853 ] 00:09:27.347 [2024-07-24 04:58:41.883236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.606 [2024-07-24 04:58:42.100572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.865 [2024-07-24 04:58:42.333136] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:29.243  Copying: 512/512 [B] (average 500 kBps) 00:09:29.243 00:09:29.243 04:58:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ ltogzqa098x9yp35opc5m03ojzav5yi16nt2fdpnsl5ukrsm8badwslnmkkvg6631i6oo0gtnct8w4umdz8oqbde1y82r3lod2fv7ti8xba6ro7f4xlb3uz11d780ta69fgq12cd7ygspvjchrqnnoo3xbc0id291asw0lm4ngh11kmz82khs0qh59p8gcuqd09fi0nxobau4lel5cw0egavgbu6f6xiaysu8tgikws14za7vbinz3gb6f0yaebufuk1nnaqkgfliqmt9c2sa2gcwl98ret34rf7im7v87if70ss1xkre4b5elxcyuiprd231i18qtr827mhews934dsp9pjh45cirzken9ce2v36fh54eljosnmzvnt3zqkmlm8sco1g23v3xdi0d9trs4z0l436q8wmuyorexxoslv4hyh4k37n50n38au0twaau7915j16s2gr6q5g5h5wap2b0idzpa6autslot2chjbpd99ddds6k9e9lqqziwn == 
\l\t\o\g\z\q\a\0\9\8\x\9\y\p\3\5\o\p\c\5\m\0\3\o\j\z\a\v\5\y\i\1\6\n\t\2\f\d\p\n\s\l\5\u\k\r\s\m\8\b\a\d\w\s\l\n\m\k\k\v\g\6\6\3\1\i\6\o\o\0\g\t\n\c\t\8\w\4\u\m\d\z\8\o\q\b\d\e\1\y\8\2\r\3\l\o\d\2\f\v\7\t\i\8\x\b\a\6\r\o\7\f\4\x\l\b\3\u\z\1\1\d\7\8\0\t\a\6\9\f\g\q\1\2\c\d\7\y\g\s\p\v\j\c\h\r\q\n\n\o\o\3\x\b\c\0\i\d\2\9\1\a\s\w\0\l\m\4\n\g\h\1\1\k\m\z\8\2\k\h\s\0\q\h\5\9\p\8\g\c\u\q\d\0\9\f\i\0\n\x\o\b\a\u\4\l\e\l\5\c\w\0\e\g\a\v\g\b\u\6\f\6\x\i\a\y\s\u\8\t\g\i\k\w\s\1\4\z\a\7\v\b\i\n\z\3\g\b\6\f\0\y\a\e\b\u\f\u\k\1\n\n\a\q\k\g\f\l\i\q\m\t\9\c\2\s\a\2\g\c\w\l\9\8\r\e\t\3\4\r\f\7\i\m\7\v\8\7\i\f\7\0\s\s\1\x\k\r\e\4\b\5\e\l\x\c\y\u\i\p\r\d\2\3\1\i\1\8\q\t\r\8\2\7\m\h\e\w\s\9\3\4\d\s\p\9\p\j\h\4\5\c\i\r\z\k\e\n\9\c\e\2\v\3\6\f\h\5\4\e\l\j\o\s\n\m\z\v\n\t\3\z\q\k\m\l\m\8\s\c\o\1\g\2\3\v\3\x\d\i\0\d\9\t\r\s\4\z\0\l\4\3\6\q\8\w\m\u\y\o\r\e\x\x\o\s\l\v\4\h\y\h\4\k\3\7\n\5\0\n\3\8\a\u\0\t\w\a\a\u\7\9\1\5\j\1\6\s\2\g\r\6\q\5\g\5\h\5\w\a\p\2\b\0\i\d\z\p\a\6\a\u\t\s\l\o\t\2\c\h\j\b\p\d\9\9\d\d\d\s\6\k\9\e\9\l\q\q\z\i\w\n ]] 00:09:29.243 00:09:29.243 real 0m6.366s 00:09:29.243 user 0m5.295s 00:09:29.243 sys 0m1.564s 00:09:29.243 ************************************ 00:09:29.243 END TEST dd_flag_nofollow 00:09:29.243 ************************************ 00:09:29.243 04:58:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:29.243 04:58:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:09:29.243 04:58:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:09:29.243 04:58:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:29.243 04:58:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.243 04:58:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:29.243 ************************************ 00:09:29.243 START TEST dd_flag_noatime 00:09:29.243 
************************************ 00:09:29.243 04:58:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:09:29.243 04:58:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:09:29.243 04:58:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:09:29.243 04:58:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:09:29.243 04:58:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:09:29.243 04:58:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:09:29.243 04:58:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:29.243 04:58:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721797122 00:09:29.243 04:58:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:29.243 04:58:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721797123 00:09:29.243 04:58:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:09:30.181 04:58:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:30.440 [2024-07-24 04:58:44.921642] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:09:30.440 [2024-07-24 04:58:44.921813] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64913 ] 00:09:30.699 [2024-07-24 04:58:45.101245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.699 [2024-07-24 04:58:45.321588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.958 [2024-07-24 04:58:45.556910] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:32.597  Copying: 512/512 [B] (average 500 kBps) 00:09:32.597 00:09:32.597 04:58:46 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:32.597 04:58:46 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721797122 )) 00:09:32.597 04:58:46 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:32.597 04:58:46 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721797123 )) 00:09:32.597 04:58:46 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:32.597 [2024-07-24 04:58:47.073373] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:09:32.597 [2024-07-24 04:58:47.073531] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64944 ] 00:09:32.856 [2024-07-24 04:58:47.255138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.856 [2024-07-24 04:58:47.471833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.115 [2024-07-24 04:58:47.709327] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:34.752  Copying: 512/512 [B] (average 500 kBps) 00:09:34.752 00:09:34.752 04:58:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:34.752 04:58:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721797127 )) 00:09:34.752 00:09:34.752 real 0m5.332s 00:09:34.752 user 0m3.569s 00:09:34.752 sys 0m2.194s 00:09:34.752 04:58:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:34.752 ************************************ 00:09:34.752 END TEST dd_flag_noatime 00:09:34.752 04:58:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:09:34.752 ************************************ 00:09:34.752 04:58:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:09:34.752 04:58:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:34.752 04:58:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:34.752 04:58:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:34.752 ************************************ 00:09:34.752 START TEST dd_flags_misc 00:09:34.752 ************************************ 00:09:34.752 04:58:49 spdk_dd.spdk_dd_posix.dd_flags_misc 
-- common/autotest_common.sh@1123 -- # io 00:09:34.752 04:58:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:09:34.752 04:58:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:09:34.752 04:58:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:09:34.752 04:58:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:34.752 04:58:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:09:34.752 04:58:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:09:34.752 04:58:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:09:34.752 04:58:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:34.752 04:58:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:34.752 [2024-07-24 04:58:49.260270] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:09:34.752 [2024-07-24 04:58:49.260394] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64991 ] 00:09:35.011 [2024-07-24 04:58:49.419816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.011 [2024-07-24 04:58:49.640268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.270 [2024-07-24 04:58:49.861772] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:36.906  Copying: 512/512 [B] (average 500 kBps) 00:09:36.906 00:09:36.907 04:58:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ abh7lab1thxnje7erkz1pehd02xf7du2e7qrj1dxbvvlhbcb8as7h3627s6kwhjo3canfvd255kcuoju6t0eapuoh7uiyto31te4jro3nquwcj6w1kdys5etlunzm3g9oca1apd0x6qziudn5get6vnzpacgbv898w3uxpj4m53u13jnznvoatk8zorc49hmepho11ueup8mzma9t3svhu452ajw6k9vmuwa13h47ye4u4e5optypzq9xjatva139pm3048db1zerjl70id0b45ns2obc61axrr6hiy25ysn5dr6dc0oifwdynz0belqrimyrdafde5notw6pp45uuvasu66au40ilk2890r15enxg0vbjx0qm6nq93t6hgr86783c3wa0l1a0d087l4dx1jt6xthuml5xsawp3ndx0laqgywvgpjlcohezit3cqegw6b79h83wyuiksj568hdf37b3svqvhq49idwm2bmpwr8nnf9hipjcm3r4ps6ah == 
\a\b\h\7\l\a\b\1\t\h\x\n\j\e\7\e\r\k\z\1\p\e\h\d\0\2\x\f\7\d\u\2\e\7\q\r\j\1\d\x\b\v\v\l\h\b\c\b\8\a\s\7\h\3\6\2\7\s\6\k\w\h\j\o\3\c\a\n\f\v\d\2\5\5\k\c\u\o\j\u\6\t\0\e\a\p\u\o\h\7\u\i\y\t\o\3\1\t\e\4\j\r\o\3\n\q\u\w\c\j\6\w\1\k\d\y\s\5\e\t\l\u\n\z\m\3\g\9\o\c\a\1\a\p\d\0\x\6\q\z\i\u\d\n\5\g\e\t\6\v\n\z\p\a\c\g\b\v\8\9\8\w\3\u\x\p\j\4\m\5\3\u\1\3\j\n\z\n\v\o\a\t\k\8\z\o\r\c\4\9\h\m\e\p\h\o\1\1\u\e\u\p\8\m\z\m\a\9\t\3\s\v\h\u\4\5\2\a\j\w\6\k\9\v\m\u\w\a\1\3\h\4\7\y\e\4\u\4\e\5\o\p\t\y\p\z\q\9\x\j\a\t\v\a\1\3\9\p\m\3\0\4\8\d\b\1\z\e\r\j\l\7\0\i\d\0\b\4\5\n\s\2\o\b\c\6\1\a\x\r\r\6\h\i\y\2\5\y\s\n\5\d\r\6\d\c\0\o\i\f\w\d\y\n\z\0\b\e\l\q\r\i\m\y\r\d\a\f\d\e\5\n\o\t\w\6\p\p\4\5\u\u\v\a\s\u\6\6\a\u\4\0\i\l\k\2\8\9\0\r\1\5\e\n\x\g\0\v\b\j\x\0\q\m\6\n\q\9\3\t\6\h\g\r\8\6\7\8\3\c\3\w\a\0\l\1\a\0\d\0\8\7\l\4\d\x\1\j\t\6\x\t\h\u\m\l\5\x\s\a\w\p\3\n\d\x\0\l\a\q\g\y\w\v\g\p\j\l\c\o\h\e\z\i\t\3\c\q\e\g\w\6\b\7\9\h\8\3\w\y\u\i\k\s\j\5\6\8\h\d\f\3\7\b\3\s\v\q\v\h\q\4\9\i\d\w\m\2\b\m\p\w\r\8\n\n\f\9\h\i\p\j\c\m\3\r\4\p\s\6\a\h ]] 00:09:36.907 04:58:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:36.907 04:58:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:36.907 [2024-07-24 04:58:51.371327] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:09:36.907 [2024-07-24 04:58:51.371485] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65018 ] 00:09:37.166 [2024-07-24 04:58:51.548515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.166 [2024-07-24 04:58:51.760780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.425 [2024-07-24 04:58:51.993520] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:39.125  Copying: 512/512 [B] (average 500 kBps) 00:09:39.125 00:09:39.125 04:58:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ abh7lab1thxnje7erkz1pehd02xf7du2e7qrj1dxbvvlhbcb8as7h3627s6kwhjo3canfvd255kcuoju6t0eapuoh7uiyto31te4jro3nquwcj6w1kdys5etlunzm3g9oca1apd0x6qziudn5get6vnzpacgbv898w3uxpj4m53u13jnznvoatk8zorc49hmepho11ueup8mzma9t3svhu452ajw6k9vmuwa13h47ye4u4e5optypzq9xjatva139pm3048db1zerjl70id0b45ns2obc61axrr6hiy25ysn5dr6dc0oifwdynz0belqrimyrdafde5notw6pp45uuvasu66au40ilk2890r15enxg0vbjx0qm6nq93t6hgr86783c3wa0l1a0d087l4dx1jt6xthuml5xsawp3ndx0laqgywvgpjlcohezit3cqegw6b79h83wyuiksj568hdf37b3svqvhq49idwm2bmpwr8nnf9hipjcm3r4ps6ah == 
\a\b\h\7\l\a\b\1\t\h\x\n\j\e\7\e\r\k\z\1\p\e\h\d\0\2\x\f\7\d\u\2\e\7\q\r\j\1\d\x\b\v\v\l\h\b\c\b\8\a\s\7\h\3\6\2\7\s\6\k\w\h\j\o\3\c\a\n\f\v\d\2\5\5\k\c\u\o\j\u\6\t\0\e\a\p\u\o\h\7\u\i\y\t\o\3\1\t\e\4\j\r\o\3\n\q\u\w\c\j\6\w\1\k\d\y\s\5\e\t\l\u\n\z\m\3\g\9\o\c\a\1\a\p\d\0\x\6\q\z\i\u\d\n\5\g\e\t\6\v\n\z\p\a\c\g\b\v\8\9\8\w\3\u\x\p\j\4\m\5\3\u\1\3\j\n\z\n\v\o\a\t\k\8\z\o\r\c\4\9\h\m\e\p\h\o\1\1\u\e\u\p\8\m\z\m\a\9\t\3\s\v\h\u\4\5\2\a\j\w\6\k\9\v\m\u\w\a\1\3\h\4\7\y\e\4\u\4\e\5\o\p\t\y\p\z\q\9\x\j\a\t\v\a\1\3\9\p\m\3\0\4\8\d\b\1\z\e\r\j\l\7\0\i\d\0\b\4\5\n\s\2\o\b\c\6\1\a\x\r\r\6\h\i\y\2\5\y\s\n\5\d\r\6\d\c\0\o\i\f\w\d\y\n\z\0\b\e\l\q\r\i\m\y\r\d\a\f\d\e\5\n\o\t\w\6\p\p\4\5\u\u\v\a\s\u\6\6\a\u\4\0\i\l\k\2\8\9\0\r\1\5\e\n\x\g\0\v\b\j\x\0\q\m\6\n\q\9\3\t\6\h\g\r\8\6\7\8\3\c\3\w\a\0\l\1\a\0\d\0\8\7\l\4\d\x\1\j\t\6\x\t\h\u\m\l\5\x\s\a\w\p\3\n\d\x\0\l\a\q\g\y\w\v\g\p\j\l\c\o\h\e\z\i\t\3\c\q\e\g\w\6\b\7\9\h\8\3\w\y\u\i\k\s\j\5\6\8\h\d\f\3\7\b\3\s\v\q\v\h\q\4\9\i\d\w\m\2\b\m\p\w\r\8\n\n\f\9\h\i\p\j\c\m\3\r\4\p\s\6\a\h ]] 00:09:39.125 04:58:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:39.125 04:58:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:39.125 [2024-07-24 04:58:53.505577] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:09:39.125 [2024-07-24 04:58:53.505735] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65045 ] 00:09:39.125 [2024-07-24 04:58:53.685295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.383 [2024-07-24 04:58:53.902352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.642 [2024-07-24 04:58:54.138745] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:41.021  Copying: 512/512 [B] (average 500 kBps) 00:09:41.021 00:09:41.021 04:58:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ abh7lab1thxnje7erkz1pehd02xf7du2e7qrj1dxbvvlhbcb8as7h3627s6kwhjo3canfvd255kcuoju6t0eapuoh7uiyto31te4jro3nquwcj6w1kdys5etlunzm3g9oca1apd0x6qziudn5get6vnzpacgbv898w3uxpj4m53u13jnznvoatk8zorc49hmepho11ueup8mzma9t3svhu452ajw6k9vmuwa13h47ye4u4e5optypzq9xjatva139pm3048db1zerjl70id0b45ns2obc61axrr6hiy25ysn5dr6dc0oifwdynz0belqrimyrdafde5notw6pp45uuvasu66au40ilk2890r15enxg0vbjx0qm6nq93t6hgr86783c3wa0l1a0d087l4dx1jt6xthuml5xsawp3ndx0laqgywvgpjlcohezit3cqegw6b79h83wyuiksj568hdf37b3svqvhq49idwm2bmpwr8nnf9hipjcm3r4ps6ah == 
\a\b\h\7\l\a\b\1\t\h\x\n\j\e\7\e\r\k\z\1\p\e\h\d\0\2\x\f\7\d\u\2\e\7\q\r\j\1\d\x\b\v\v\l\h\b\c\b\8\a\s\7\h\3\6\2\7\s\6\k\w\h\j\o\3\c\a\n\f\v\d\2\5\5\k\c\u\o\j\u\6\t\0\e\a\p\u\o\h\7\u\i\y\t\o\3\1\t\e\4\j\r\o\3\n\q\u\w\c\j\6\w\1\k\d\y\s\5\e\t\l\u\n\z\m\3\g\9\o\c\a\1\a\p\d\0\x\6\q\z\i\u\d\n\5\g\e\t\6\v\n\z\p\a\c\g\b\v\8\9\8\w\3\u\x\p\j\4\m\5\3\u\1\3\j\n\z\n\v\o\a\t\k\8\z\o\r\c\4\9\h\m\e\p\h\o\1\1\u\e\u\p\8\m\z\m\a\9\t\3\s\v\h\u\4\5\2\a\j\w\6\k\9\v\m\u\w\a\1\3\h\4\7\y\e\4\u\4\e\5\o\p\t\y\p\z\q\9\x\j\a\t\v\a\1\3\9\p\m\3\0\4\8\d\b\1\z\e\r\j\l\7\0\i\d\0\b\4\5\n\s\2\o\b\c\6\1\a\x\r\r\6\h\i\y\2\5\y\s\n\5\d\r\6\d\c\0\o\i\f\w\d\y\n\z\0\b\e\l\q\r\i\m\y\r\d\a\f\d\e\5\n\o\t\w\6\p\p\4\5\u\u\v\a\s\u\6\6\a\u\4\0\i\l\k\2\8\9\0\r\1\5\e\n\x\g\0\v\b\j\x\0\q\m\6\n\q\9\3\t\6\h\g\r\8\6\7\8\3\c\3\w\a\0\l\1\a\0\d\0\8\7\l\4\d\x\1\j\t\6\x\t\h\u\m\l\5\x\s\a\w\p\3\n\d\x\0\l\a\q\g\y\w\v\g\p\j\l\c\o\h\e\z\i\t\3\c\q\e\g\w\6\b\7\9\h\8\3\w\y\u\i\k\s\j\5\6\8\h\d\f\3\7\b\3\s\v\q\v\h\q\4\9\i\d\w\m\2\b\m\p\w\r\8\n\n\f\9\h\i\p\j\c\m\3\r\4\p\s\6\a\h ]] 00:09:41.021 04:58:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:41.021 04:58:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:41.021 [2024-07-24 04:58:55.641153] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:09:41.021 [2024-07-24 04:58:55.641307] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65072 ] 00:09:41.280 [2024-07-24 04:58:55.819900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.539 [2024-07-24 04:58:56.032290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.799 [2024-07-24 04:58:56.269757] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:43.178  Copying: 512/512 [B] (average 250 kBps) 00:09:43.178 00:09:43.178 04:58:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ abh7lab1thxnje7erkz1pehd02xf7du2e7qrj1dxbvvlhbcb8as7h3627s6kwhjo3canfvd255kcuoju6t0eapuoh7uiyto31te4jro3nquwcj6w1kdys5etlunzm3g9oca1apd0x6qziudn5get6vnzpacgbv898w3uxpj4m53u13jnznvoatk8zorc49hmepho11ueup8mzma9t3svhu452ajw6k9vmuwa13h47ye4u4e5optypzq9xjatva139pm3048db1zerjl70id0b45ns2obc61axrr6hiy25ysn5dr6dc0oifwdynz0belqrimyrdafde5notw6pp45uuvasu66au40ilk2890r15enxg0vbjx0qm6nq93t6hgr86783c3wa0l1a0d087l4dx1jt6xthuml5xsawp3ndx0laqgywvgpjlcohezit3cqegw6b79h83wyuiksj568hdf37b3svqvhq49idwm2bmpwr8nnf9hipjcm3r4ps6ah == 
\a\b\h\7\l\a\b\1\t\h\x\n\j\e\7\e\r\k\z\1\p\e\h\d\0\2\x\f\7\d\u\2\e\7\q\r\j\1\d\x\b\v\v\l\h\b\c\b\8\a\s\7\h\3\6\2\7\s\6\k\w\h\j\o\3\c\a\n\f\v\d\2\5\5\k\c\u\o\j\u\6\t\0\e\a\p\u\o\h\7\u\i\y\t\o\3\1\t\e\4\j\r\o\3\n\q\u\w\c\j\6\w\1\k\d\y\s\5\e\t\l\u\n\z\m\3\g\9\o\c\a\1\a\p\d\0\x\6\q\z\i\u\d\n\5\g\e\t\6\v\n\z\p\a\c\g\b\v\8\9\8\w\3\u\x\p\j\4\m\5\3\u\1\3\j\n\z\n\v\o\a\t\k\8\z\o\r\c\4\9\h\m\e\p\h\o\1\1\u\e\u\p\8\m\z\m\a\9\t\3\s\v\h\u\4\5\2\a\j\w\6\k\9\v\m\u\w\a\1\3\h\4\7\y\e\4\u\4\e\5\o\p\t\y\p\z\q\9\x\j\a\t\v\a\1\3\9\p\m\3\0\4\8\d\b\1\z\e\r\j\l\7\0\i\d\0\b\4\5\n\s\2\o\b\c\6\1\a\x\r\r\6\h\i\y\2\5\y\s\n\5\d\r\6\d\c\0\o\i\f\w\d\y\n\z\0\b\e\l\q\r\i\m\y\r\d\a\f\d\e\5\n\o\t\w\6\p\p\4\5\u\u\v\a\s\u\6\6\a\u\4\0\i\l\k\2\8\9\0\r\1\5\e\n\x\g\0\v\b\j\x\0\q\m\6\n\q\9\3\t\6\h\g\r\8\6\7\8\3\c\3\w\a\0\l\1\a\0\d\0\8\7\l\4\d\x\1\j\t\6\x\t\h\u\m\l\5\x\s\a\w\p\3\n\d\x\0\l\a\q\g\y\w\v\g\p\j\l\c\o\h\e\z\i\t\3\c\q\e\g\w\6\b\7\9\h\8\3\w\y\u\i\k\s\j\5\6\8\h\d\f\3\7\b\3\s\v\q\v\h\q\4\9\i\d\w\m\2\b\m\p\w\r\8\n\n\f\9\h\i\p\j\c\m\3\r\4\p\s\6\a\h ]] 00:09:43.178 04:58:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:43.178 04:58:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:09:43.178 04:58:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:09:43.178 04:58:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:09:43.178 04:58:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:43.178 04:58:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:43.178 [2024-07-24 04:58:57.756033] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:09:43.178 [2024-07-24 04:58:57.756137] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65105 ] 00:09:43.437 [2024-07-24 04:58:57.916263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.695 [2024-07-24 04:58:58.129220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.953 [2024-07-24 04:58:58.359314] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:45.331  Copying: 512/512 [B] (average 500 kBps) 00:09:45.331 00:09:45.331 04:58:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 1mlzve8ph8gpizrk3werhv1s3yd22wvsl23v9ceospngp0kz2sm48wk4ikiblxjxum0t26lk87wukgmoz787k5npbqmuaxy058qa3o1gf8ql4gnr7st8x5oc9dedma6a90omwt2cjl9656745nmmx44f4ismvpudlk1xwtzymzaqgctv4kgwzxibt1j4jwzy1hbfoil20zat9eeuekv4544rsjf52h63fx6yebjwlvj4ddukbozdhecfuqr1zi25xvwyscgugh6edsmmflhgjitq6vsvdet8shlh3q2ur0y1lw688lb9u8d47urea4jfbuc4zbaza5zbuav10ihifrhkdxxgbtybpx8lzvmy1aj1dpzubovuwyonn4p9g4myguwou2fsnhf80ve0u6d2jxnnr5qi921n1abziob2zpbbp6hux6tqngro0d8asvfyy8vfbhfxn9ib3drtmdoffzl3vezteuwtyqyp2z2dildb3krnh7krqzv9wbnt8x7x == 
\1\m\l\z\v\e\8\p\h\8\g\p\i\z\r\k\3\w\e\r\h\v\1\s\3\y\d\2\2\w\v\s\l\2\3\v\9\c\e\o\s\p\n\g\p\0\k\z\2\s\m\4\8\w\k\4\i\k\i\b\l\x\j\x\u\m\0\t\2\6\l\k\8\7\w\u\k\g\m\o\z\7\8\7\k\5\n\p\b\q\m\u\a\x\y\0\5\8\q\a\3\o\1\g\f\8\q\l\4\g\n\r\7\s\t\8\x\5\o\c\9\d\e\d\m\a\6\a\9\0\o\m\w\t\2\c\j\l\9\6\5\6\7\4\5\n\m\m\x\4\4\f\4\i\s\m\v\p\u\d\l\k\1\x\w\t\z\y\m\z\a\q\g\c\t\v\4\k\g\w\z\x\i\b\t\1\j\4\j\w\z\y\1\h\b\f\o\i\l\2\0\z\a\t\9\e\e\u\e\k\v\4\5\4\4\r\s\j\f\5\2\h\6\3\f\x\6\y\e\b\j\w\l\v\j\4\d\d\u\k\b\o\z\d\h\e\c\f\u\q\r\1\z\i\2\5\x\v\w\y\s\c\g\u\g\h\6\e\d\s\m\m\f\l\h\g\j\i\t\q\6\v\s\v\d\e\t\8\s\h\l\h\3\q\2\u\r\0\y\1\l\w\6\8\8\l\b\9\u\8\d\4\7\u\r\e\a\4\j\f\b\u\c\4\z\b\a\z\a\5\z\b\u\a\v\1\0\i\h\i\f\r\h\k\d\x\x\g\b\t\y\b\p\x\8\l\z\v\m\y\1\a\j\1\d\p\z\u\b\o\v\u\w\y\o\n\n\4\p\9\g\4\m\y\g\u\w\o\u\2\f\s\n\h\f\8\0\v\e\0\u\6\d\2\j\x\n\n\r\5\q\i\9\2\1\n\1\a\b\z\i\o\b\2\z\p\b\b\p\6\h\u\x\6\t\q\n\g\r\o\0\d\8\a\s\v\f\y\y\8\v\f\b\h\f\x\n\9\i\b\3\d\r\t\m\d\o\f\f\z\l\3\v\e\z\t\e\u\w\t\y\q\y\p\2\z\2\d\i\l\d\b\3\k\r\n\h\7\k\r\q\z\v\9\w\b\n\t\8\x\7\x ]] 00:09:45.331 04:58:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:45.331 04:58:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:45.331 [2024-07-24 04:58:59.871942] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:09:45.331 [2024-07-24 04:58:59.872115] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65132 ] 00:09:45.589 [2024-07-24 04:59:00.055791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.847 [2024-07-24 04:59:00.275606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.105 [2024-07-24 04:59:00.506780] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:47.482  Copying: 512/512 [B] (average 500 kBps) 00:09:47.482 00:09:47.482 04:59:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 1mlzve8ph8gpizrk3werhv1s3yd22wvsl23v9ceospngp0kz2sm48wk4ikiblxjxum0t26lk87wukgmoz787k5npbqmuaxy058qa3o1gf8ql4gnr7st8x5oc9dedma6a90omwt2cjl9656745nmmx44f4ismvpudlk1xwtzymzaqgctv4kgwzxibt1j4jwzy1hbfoil20zat9eeuekv4544rsjf52h63fx6yebjwlvj4ddukbozdhecfuqr1zi25xvwyscgugh6edsmmflhgjitq6vsvdet8shlh3q2ur0y1lw688lb9u8d47urea4jfbuc4zbaza5zbuav10ihifrhkdxxgbtybpx8lzvmy1aj1dpzubovuwyonn4p9g4myguwou2fsnhf80ve0u6d2jxnnr5qi921n1abziob2zpbbp6hux6tqngro0d8asvfyy8vfbhfxn9ib3drtmdoffzl3vezteuwtyqyp2z2dildb3krnh7krqzv9wbnt8x7x == 
\1\m\l\z\v\e\8\p\h\8\g\p\i\z\r\k\3\w\e\r\h\v\1\s\3\y\d\2\2\w\v\s\l\2\3\v\9\c\e\o\s\p\n\g\p\0\k\z\2\s\m\4\8\w\k\4\i\k\i\b\l\x\j\x\u\m\0\t\2\6\l\k\8\7\w\u\k\g\m\o\z\7\8\7\k\5\n\p\b\q\m\u\a\x\y\0\5\8\q\a\3\o\1\g\f\8\q\l\4\g\n\r\7\s\t\8\x\5\o\c\9\d\e\d\m\a\6\a\9\0\o\m\w\t\2\c\j\l\9\6\5\6\7\4\5\n\m\m\x\4\4\f\4\i\s\m\v\p\u\d\l\k\1\x\w\t\z\y\m\z\a\q\g\c\t\v\4\k\g\w\z\x\i\b\t\1\j\4\j\w\z\y\1\h\b\f\o\i\l\2\0\z\a\t\9\e\e\u\e\k\v\4\5\4\4\r\s\j\f\5\2\h\6\3\f\x\6\y\e\b\j\w\l\v\j\4\d\d\u\k\b\o\z\d\h\e\c\f\u\q\r\1\z\i\2\5\x\v\w\y\s\c\g\u\g\h\6\e\d\s\m\m\f\l\h\g\j\i\t\q\6\v\s\v\d\e\t\8\s\h\l\h\3\q\2\u\r\0\y\1\l\w\6\8\8\l\b\9\u\8\d\4\7\u\r\e\a\4\j\f\b\u\c\4\z\b\a\z\a\5\z\b\u\a\v\1\0\i\h\i\f\r\h\k\d\x\x\g\b\t\y\b\p\x\8\l\z\v\m\y\1\a\j\1\d\p\z\u\b\o\v\u\w\y\o\n\n\4\p\9\g\4\m\y\g\u\w\o\u\2\f\s\n\h\f\8\0\v\e\0\u\6\d\2\j\x\n\n\r\5\q\i\9\2\1\n\1\a\b\z\i\o\b\2\z\p\b\b\p\6\h\u\x\6\t\q\n\g\r\o\0\d\8\a\s\v\f\y\y\8\v\f\b\h\f\x\n\9\i\b\3\d\r\t\m\d\o\f\f\z\l\3\v\e\z\t\e\u\w\t\y\q\y\p\2\z\2\d\i\l\d\b\3\k\r\n\h\7\k\r\q\z\v\9\w\b\n\t\8\x\7\x ]] 00:09:47.482 04:59:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:47.482 04:59:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:47.482 [2024-07-24 04:59:01.978488] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:09:47.482 [2024-07-24 04:59:01.978663] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65160 ] 00:09:47.741 [2024-07-24 04:59:02.138099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.741 [2024-07-24 04:59:02.353836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.000 [2024-07-24 04:59:02.580141] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:49.635  Copying: 512/512 [B] (average 125 kBps) 00:09:49.635 00:09:49.635 04:59:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 1mlzve8ph8gpizrk3werhv1s3yd22wvsl23v9ceospngp0kz2sm48wk4ikiblxjxum0t26lk87wukgmoz787k5npbqmuaxy058qa3o1gf8ql4gnr7st8x5oc9dedma6a90omwt2cjl9656745nmmx44f4ismvpudlk1xwtzymzaqgctv4kgwzxibt1j4jwzy1hbfoil20zat9eeuekv4544rsjf52h63fx6yebjwlvj4ddukbozdhecfuqr1zi25xvwyscgugh6edsmmflhgjitq6vsvdet8shlh3q2ur0y1lw688lb9u8d47urea4jfbuc4zbaza5zbuav10ihifrhkdxxgbtybpx8lzvmy1aj1dpzubovuwyonn4p9g4myguwou2fsnhf80ve0u6d2jxnnr5qi921n1abziob2zpbbp6hux6tqngro0d8asvfyy8vfbhfxn9ib3drtmdoffzl3vezteuwtyqyp2z2dildb3krnh7krqzv9wbnt8x7x == 
\1\m\l\z\v\e\8\p\h\8\g\p\i\z\r\k\3\w\e\r\h\v\1\s\3\y\d\2\2\w\v\s\l\2\3\v\9\c\e\o\s\p\n\g\p\0\k\z\2\s\m\4\8\w\k\4\i\k\i\b\l\x\j\x\u\m\0\t\2\6\l\k\8\7\w\u\k\g\m\o\z\7\8\7\k\5\n\p\b\q\m\u\a\x\y\0\5\8\q\a\3\o\1\g\f\8\q\l\4\g\n\r\7\s\t\8\x\5\o\c\9\d\e\d\m\a\6\a\9\0\o\m\w\t\2\c\j\l\9\6\5\6\7\4\5\n\m\m\x\4\4\f\4\i\s\m\v\p\u\d\l\k\1\x\w\t\z\y\m\z\a\q\g\c\t\v\4\k\g\w\z\x\i\b\t\1\j\4\j\w\z\y\1\h\b\f\o\i\l\2\0\z\a\t\9\e\e\u\e\k\v\4\5\4\4\r\s\j\f\5\2\h\6\3\f\x\6\y\e\b\j\w\l\v\j\4\d\d\u\k\b\o\z\d\h\e\c\f\u\q\r\1\z\i\2\5\x\v\w\y\s\c\g\u\g\h\6\e\d\s\m\m\f\l\h\g\j\i\t\q\6\v\s\v\d\e\t\8\s\h\l\h\3\q\2\u\r\0\y\1\l\w\6\8\8\l\b\9\u\8\d\4\7\u\r\e\a\4\j\f\b\u\c\4\z\b\a\z\a\5\z\b\u\a\v\1\0\i\h\i\f\r\h\k\d\x\x\g\b\t\y\b\p\x\8\l\z\v\m\y\1\a\j\1\d\p\z\u\b\o\v\u\w\y\o\n\n\4\p\9\g\4\m\y\g\u\w\o\u\2\f\s\n\h\f\8\0\v\e\0\u\6\d\2\j\x\n\n\r\5\q\i\9\2\1\n\1\a\b\z\i\o\b\2\z\p\b\b\p\6\h\u\x\6\t\q\n\g\r\o\0\d\8\a\s\v\f\y\y\8\v\f\b\h\f\x\n\9\i\b\3\d\r\t\m\d\o\f\f\z\l\3\v\e\z\t\e\u\w\t\y\q\y\p\2\z\2\d\i\l\d\b\3\k\r\n\h\7\k\r\q\z\v\9\w\b\n\t\8\x\7\x ]] 00:09:49.635 04:59:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:49.635 04:59:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:49.635 [2024-07-24 04:59:04.096812] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:09:49.635 [2024-07-24 04:59:04.096988] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65187 ] 00:09:49.894 [2024-07-24 04:59:04.280551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.894 [2024-07-24 04:59:04.500105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.153 [2024-07-24 04:59:04.727926] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:51.800  Copying: 512/512 [B] (average 250 kBps) 00:09:51.800 00:09:51.800 04:59:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 1mlzve8ph8gpizrk3werhv1s3yd22wvsl23v9ceospngp0kz2sm48wk4ikiblxjxum0t26lk87wukgmoz787k5npbqmuaxy058qa3o1gf8ql4gnr7st8x5oc9dedma6a90omwt2cjl9656745nmmx44f4ismvpudlk1xwtzymzaqgctv4kgwzxibt1j4jwzy1hbfoil20zat9eeuekv4544rsjf52h63fx6yebjwlvj4ddukbozdhecfuqr1zi25xvwyscgugh6edsmmflhgjitq6vsvdet8shlh3q2ur0y1lw688lb9u8d47urea4jfbuc4zbaza5zbuav10ihifrhkdxxgbtybpx8lzvmy1aj1dpzubovuwyonn4p9g4myguwou2fsnhf80ve0u6d2jxnnr5qi921n1abziob2zpbbp6hux6tqngro0d8asvfyy8vfbhfxn9ib3drtmdoffzl3vezteuwtyqyp2z2dildb3krnh7krqzv9wbnt8x7x == 
\1\m\l\z\v\e\8\p\h\8\g\p\i\z\r\k\3\w\e\r\h\v\1\s\3\y\d\2\2\w\v\s\l\2\3\v\9\c\e\o\s\p\n\g\p\0\k\z\2\s\m\4\8\w\k\4\i\k\i\b\l\x\j\x\u\m\0\t\2\6\l\k\8\7\w\u\k\g\m\o\z\7\8\7\k\5\n\p\b\q\m\u\a\x\y\0\5\8\q\a\3\o\1\g\f\8\q\l\4\g\n\r\7\s\t\8\x\5\o\c\9\d\e\d\m\a\6\a\9\0\o\m\w\t\2\c\j\l\9\6\5\6\7\4\5\n\m\m\x\4\4\f\4\i\s\m\v\p\u\d\l\k\1\x\w\t\z\y\m\z\a\q\g\c\t\v\4\k\g\w\z\x\i\b\t\1\j\4\j\w\z\y\1\h\b\f\o\i\l\2\0\z\a\t\9\e\e\u\e\k\v\4\5\4\4\r\s\j\f\5\2\h\6\3\f\x\6\y\e\b\j\w\l\v\j\4\d\d\u\k\b\o\z\d\h\e\c\f\u\q\r\1\z\i\2\5\x\v\w\y\s\c\g\u\g\h\6\e\d\s\m\m\f\l\h\g\j\i\t\q\6\v\s\v\d\e\t\8\s\h\l\h\3\q\2\u\r\0\y\1\l\w\6\8\8\l\b\9\u\8\d\4\7\u\r\e\a\4\j\f\b\u\c\4\z\b\a\z\a\5\z\b\u\a\v\1\0\i\h\i\f\r\h\k\d\x\x\g\b\t\y\b\p\x\8\l\z\v\m\y\1\a\j\1\d\p\z\u\b\o\v\u\w\y\o\n\n\4\p\9\g\4\m\y\g\u\w\o\u\2\f\s\n\h\f\8\0\v\e\0\u\6\d\2\j\x\n\n\r\5\q\i\9\2\1\n\1\a\b\z\i\o\b\2\z\p\b\b\p\6\h\u\x\6\t\q\n\g\r\o\0\d\8\a\s\v\f\y\y\8\v\f\b\h\f\x\n\9\i\b\3\d\r\t\m\d\o\f\f\z\l\3\v\e\z\t\e\u\w\t\y\q\y\p\2\z\2\d\i\l\d\b\3\k\r\n\h\7\k\r\q\z\v\9\w\b\n\t\8\x\7\x ]] 00:09:51.800 00:09:51.800 real 0m16.963s 00:09:51.800 user 0m14.156s 00:09:51.800 sys 0m8.587s 00:09:51.800 04:59:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:51.800 ************************************ 00:09:51.800 END TEST dd_flags_misc 00:09:51.800 ************************************ 00:09:51.800 04:59:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:09:51.800 04:59:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:09:51.800 04:59:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:09:51.800 * Second test run, disabling liburing, forcing AIO 00:09:51.800 04:59:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:09:51.800 04:59:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:09:51.800 04:59:06 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:51.800 04:59:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:51.800 04:59:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:51.800 ************************************ 00:09:51.800 START TEST dd_flag_append_forced_aio 00:09:51.800 ************************************ 00:09:51.800 04:59:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:09:51.800 04:59:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:09:51.800 04:59:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:09:51.800 04:59:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:09:51.800 04:59:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:51.800 04:59:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:51.800 04:59:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=y5pg4cgne88o01frwb26b9b9467roz82 00:09:51.800 04:59:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:09:51.800 04:59:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:51.800 04:59:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:51.800 04:59:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=yomckywqfy20a1vk6jeog21c52kwa3kd 00:09:51.800 04:59:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s y5pg4cgne88o01frwb26b9b9467roz82 00:09:51.800 04:59:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s yomckywqfy20a1vk6jeog21c52kwa3kd 00:09:51.800 04:59:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- 
# /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:09:51.800 [2024-07-24 04:59:06.281581] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:09:51.800 [2024-07-24 04:59:06.281691] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65233 ] 00:09:52.059 [2024-07-24 04:59:06.440420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.059 [2024-07-24 04:59:06.653927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.318 [2024-07-24 04:59:06.886105] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:53.957  Copying: 32/32 [B] (average 31 kBps) 00:09:53.957 00:09:53.957 04:59:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ yomckywqfy20a1vk6jeog21c52kwa3kdy5pg4cgne88o01frwb26b9b9467roz82 == \y\o\m\c\k\y\w\q\f\y\2\0\a\1\v\k\6\j\e\o\g\2\1\c\5\2\k\w\a\3\k\d\y\5\p\g\4\c\g\n\e\8\8\o\0\1\f\r\w\b\2\6\b\9\b\9\4\6\7\r\o\z\8\2 ]] 00:09:53.957 00:09:53.957 real 0m2.095s 00:09:53.957 user 0m1.766s 00:09:53.957 sys 0m0.209s 00:09:53.957 04:59:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:53.957 ************************************ 00:09:53.957 END TEST dd_flag_append_forced_aio 00:09:53.957 ************************************ 00:09:53.957 04:59:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:53.957 04:59:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:09:53.957 04:59:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:09:53.957 04:59:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:53.957 04:59:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:53.957 ************************************ 00:09:53.957 START TEST dd_flag_directory_forced_aio 00:09:53.957 ************************************ 00:09:53.957 04:59:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:09:53.957 04:59:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:53.957 04:59:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:09:53.957 04:59:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:53.957 04:59:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:53.957 04:59:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:53.957 04:59:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:53.957 04:59:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:53.957 04:59:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:53.957 04:59:08 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:53.957 04:59:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:53.957 04:59:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:53.957 04:59:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:53.957 [2024-07-24 04:59:08.469401] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:09:53.957 [2024-07-24 04:59:08.469575] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65277 ] 00:09:54.216 [2024-07-24 04:59:08.651010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.475 [2024-07-24 04:59:08.867507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.476 [2024-07-24 04:59:09.096279] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:54.734 [2024-07-24 04:59:09.211932] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:54.734 [2024-07-24 04:59:09.211986] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:54.734 [2024-07-24 04:59:09.212034] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:55.672 [2024-07-24 04:59:10.028984] spdk_dd.c:1536:main: *ERROR*: Error occurred while 
performing copy 00:09:55.931 04:59:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:09:55.931 04:59:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:55.931 04:59:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:09:55.931 04:59:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:09:55.931 04:59:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:09:55.931 04:59:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:55.932 04:59:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:55.932 04:59:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:09:55.932 04:59:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:55.932 04:59:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:55.932 04:59:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:55.932 04:59:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:55.932 04:59:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:55.932 04:59:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:55.932 04:59:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:55.932 04:59:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:55.932 04:59:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:55.932 04:59:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:56.191 [2024-07-24 04:59:10.580099] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:09:56.191 [2024-07-24 04:59:10.580230] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65304 ] 00:09:56.191 [2024-07-24 04:59:10.740100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.449 [2024-07-24 04:59:10.958144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.709 [2024-07-24 04:59:11.186598] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:56.709 [2024-07-24 04:59:11.302827] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:56.709 [2024-07-24 04:59:11.302880] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:56.709 [2024-07-24 04:59:11.302927] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:57.646 [2024-07-24 04:59:12.125732] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:58.214 04:59:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:09:58.214 04:59:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:58.214 ************************************ 00:09:58.215 END TEST dd_flag_directory_forced_aio 00:09:58.215 ************************************ 00:09:58.215 04:59:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:09:58.215 04:59:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:09:58.215 04:59:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:09:58.215 04:59:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:58.215 00:09:58.215 real 0m4.219s 00:09:58.215 user 0m3.528s 00:09:58.215 sys 0m0.470s 00:09:58.215 04:59:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:58.215 04:59:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:58.215 04:59:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:09:58.215 04:59:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:58.215 04:59:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:58.215 04:59:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:58.215 ************************************ 00:09:58.215 START TEST dd_flag_nofollow_forced_aio 00:09:58.215 ************************************ 00:09:58.215 04:59:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:09:58.215 04:59:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:58.215 04:59:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:58.215 04:59:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:58.215 04:59:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:58.215 04:59:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 
--iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:58.215 04:59:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:09:58.215 04:59:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:58.215 04:59:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:58.215 04:59:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:58.215 04:59:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:58.215 04:59:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:58.215 04:59:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:58.215 04:59:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:58.215 04:59:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:58.215 04:59:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:58.215 04:59:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:58.215 [2024-07-24 04:59:12.753443] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:09:58.215 [2024-07-24 04:59:12.753622] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65350 ] 00:09:58.474 [2024-07-24 04:59:12.936146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.734 [2024-07-24 04:59:13.154792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.992 [2024-07-24 04:59:13.381955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:58.993 [2024-07-24 04:59:13.499417] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:09:58.993 [2024-07-24 04:59:13.499487] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:09:58.993 [2024-07-24 04:59:13.499522] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:59.928 [2024-07-24 04:59:14.323232] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:00.187 04:59:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:10:00.187 04:59:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:00.187 04:59:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:10:00.187 04:59:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:10:00.187 04:59:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:10:00.187 
04:59:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:00.187 04:59:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:00.187 04:59:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:10:00.187 04:59:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:00.187 04:59:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:00.187 04:59:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:00.187 04:59:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:00.187 04:59:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:00.187 04:59:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:00.187 04:59:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:00.187 04:59:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:00.187 04:59:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:00.187 04:59:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:00.447 [2024-07-24 04:59:14.884444] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:10:00.447 [2024-07-24 04:59:14.884618] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65376 ] 00:10:00.447 [2024-07-24 04:59:15.045410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.705 [2024-07-24 04:59:15.263057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.964 [2024-07-24 04:59:15.493869] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:01.222 [2024-07-24 04:59:15.613027] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:10:01.223 [2024-07-24 04:59:15.613081] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:10:01.223 [2024-07-24 04:59:15.613116] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:02.159 [2024-07-24 04:59:16.430136] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:02.417 04:59:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:10:02.417 04:59:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:02.417 04:59:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@660 -- # es=88 00:10:02.417 04:59:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:10:02.417 04:59:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:10:02.417 04:59:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:02.417 04:59:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:10:02.417 04:59:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:02.417 04:59:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:02.417 04:59:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:02.417 [2024-07-24 04:59:16.999351] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:10:02.417 [2024-07-24 04:59:16.999513] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65397 ] 00:10:02.676 [2024-07-24 04:59:17.171557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.935 [2024-07-24 04:59:17.390062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.193 [2024-07-24 04:59:17.614827] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:04.569  Copying: 512/512 [B] (average 500 kBps) 00:10:04.569 00:10:04.569 04:59:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ ztv5odwqrovdimvcxyrz2ynrmji0x3ympyexqi9paihzhenkhcwibt6uxwlu25rlms2vx77raqh8zxyf5y85cmj21dar735ad8dingdaht5y7v2usyumc5xvvb3ngcv7fmgls8zqpe8vrpf6qm3zo97k69s1770r25cwarym47mtk70t1543ky2oy9tv00c98wri001zg2tfur2l8uq5pgtqeeacghiz8a6f7c8jz1j487tkg36209x8mc6f9me56qfnn912u8tk0zoy460pzpktnnltzto70y44hsucuogbm0j2yfppeetn2tuwj6ea1vqg92olqksh45ynqab0dhxz40unfmis200slmp5sf8mg19ub22t1y3hcdycusv4sij6wsnktsnnr0ug672cmidms9y5g22a258pxxd6yvy188hn0m7pz5yjdm7ngkmqj546fun2xnwapm06fq90hr34o7ljxcsyon4zrnuyv2h62oni0kkm371fp800x6bc == 
\z\t\v\5\o\d\w\q\r\o\v\d\i\m\v\c\x\y\r\z\2\y\n\r\m\j\i\0\x\3\y\m\p\y\e\x\q\i\9\p\a\i\h\z\h\e\n\k\h\c\w\i\b\t\6\u\x\w\l\u\2\5\r\l\m\s\2\v\x\7\7\r\a\q\h\8\z\x\y\f\5\y\8\5\c\m\j\2\1\d\a\r\7\3\5\a\d\8\d\i\n\g\d\a\h\t\5\y\7\v\2\u\s\y\u\m\c\5\x\v\v\b\3\n\g\c\v\7\f\m\g\l\s\8\z\q\p\e\8\v\r\p\f\6\q\m\3\z\o\9\7\k\6\9\s\1\7\7\0\r\2\5\c\w\a\r\y\m\4\7\m\t\k\7\0\t\1\5\4\3\k\y\2\o\y\9\t\v\0\0\c\9\8\w\r\i\0\0\1\z\g\2\t\f\u\r\2\l\8\u\q\5\p\g\t\q\e\e\a\c\g\h\i\z\8\a\6\f\7\c\8\j\z\1\j\4\8\7\t\k\g\3\6\2\0\9\x\8\m\c\6\f\9\m\e\5\6\q\f\n\n\9\1\2\u\8\t\k\0\z\o\y\4\6\0\p\z\p\k\t\n\n\l\t\z\t\o\7\0\y\4\4\h\s\u\c\u\o\g\b\m\0\j\2\y\f\p\p\e\e\t\n\2\t\u\w\j\6\e\a\1\v\q\g\9\2\o\l\q\k\s\h\4\5\y\n\q\a\b\0\d\h\x\z\4\0\u\n\f\m\i\s\2\0\0\s\l\m\p\5\s\f\8\m\g\1\9\u\b\2\2\t\1\y\3\h\c\d\y\c\u\s\v\4\s\i\j\6\w\s\n\k\t\s\n\n\r\0\u\g\6\7\2\c\m\i\d\m\s\9\y\5\g\2\2\a\2\5\8\p\x\x\d\6\y\v\y\1\8\8\h\n\0\m\7\p\z\5\y\j\d\m\7\n\g\k\m\q\j\5\4\6\f\u\n\2\x\n\w\a\p\m\0\6\f\q\9\0\h\r\3\4\o\7\l\j\x\c\s\y\o\n\4\z\r\n\u\y\v\2\h\6\2\o\n\i\0\k\k\m\3\7\1\f\p\8\0\0\x\6\b\c ]] 00:10:04.569 00:10:04.569 real 0m6.389s 00:10:04.569 user 0m5.297s 00:10:04.569 sys 0m0.745s 00:10:04.569 ************************************ 00:10:04.569 END TEST dd_flag_nofollow_forced_aio 00:10:04.569 04:59:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.569 04:59:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:04.569 ************************************ 00:10:04.569 04:59:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:10:04.569 04:59:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:04.569 04:59:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.569 04:59:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:04.569 ************************************ 00:10:04.569 START TEST 
dd_flag_noatime_forced_aio 00:10:04.569 ************************************ 00:10:04.569 04:59:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:10:04.569 04:59:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:10:04.569 04:59:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:10:04.569 04:59:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:10:04.569 04:59:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:04.569 04:59:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:04.569 04:59:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:04.569 04:59:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721797157 00:10:04.569 04:59:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:04.569 04:59:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721797159 00:10:04.569 04:59:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:10:05.504 04:59:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:05.762 [2024-07-24 04:59:20.223190] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:10:05.762 [2024-07-24 04:59:20.223347] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65455 ] 00:10:06.021 [2024-07-24 04:59:20.409815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.279 [2024-07-24 04:59:20.675396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.279 [2024-07-24 04:59:20.909986] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:07.910  Copying: 512/512 [B] (average 500 kBps) 00:10:07.910 00:10:07.910 04:59:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:07.910 04:59:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721797157 )) 00:10:07.910 04:59:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:07.910 04:59:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721797159 )) 00:10:07.910 04:59:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:07.910 [2024-07-24 04:59:22.391767] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:10:07.910 [2024-07-24 04:59:22.391891] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65487 ] 00:10:08.169 [2024-07-24 04:59:22.552117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.169 [2024-07-24 04:59:22.767575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.428 [2024-07-24 04:59:22.990931] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:10.059  Copying: 512/512 [B] (average 500 kBps) 00:10:10.059 00:10:10.059 04:59:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:10.059 04:59:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721797163 )) 00:10:10.059 00:10:10.059 real 0m5.308s 00:10:10.059 user 0m3.562s 00:10:10.059 sys 0m0.502s 00:10:10.059 04:59:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:10.059 04:59:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:10.059 ************************************ 00:10:10.059 END TEST dd_flag_noatime_forced_aio 00:10:10.059 ************************************ 00:10:10.059 04:59:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:10:10.059 04:59:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:10.059 04:59:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:10.059 04:59:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:10.059 ************************************ 00:10:10.059 START TEST dd_flags_misc_forced_aio 00:10:10.059 
************************************ 00:10:10.059 04:59:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:10:10.059 04:59:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:10:10.059 04:59:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:10:10.059 04:59:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:10:10.059 04:59:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:10:10.059 04:59:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:10:10.059 04:59:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:10.059 04:59:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:10.059 04:59:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:10.059 04:59:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:10:10.059 [2024-07-24 04:59:24.532407] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:10:10.059 [2024-07-24 04:59:24.532587] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65534 ] 00:10:10.317 [2024-07-24 04:59:24.691497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.317 [2024-07-24 04:59:24.909043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.575 [2024-07-24 04:59:25.134011] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:12.211  Copying: 512/512 [B] (average 500 kBps) 00:10:12.211 00:10:12.211 04:59:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ lssjfte5wqyz9okzme7dgrb6kzdforscrnme9d8dxf038c456s7dl0e96hbs6wa7svpx2eu2rt1k0afixti47nwzmh2u12wetix8aj8yes4lewnvcfmgwmyj9iwvhkul9t8mexmnw7zcxvszulrd5auqf91qkez6b0irbyd61ivhj0cg0xv3y447n0enhwivcdksi1rcap1mom0lxnu1iu3fh0i1ghf6w1znyb409khb2m2vvbfyu0oo7hfhi0aprchsdf99hwml43egirz42v5kqopiljz6agurs7mr7vi607vicwuoh945zwlmczglf7hid5bqf65is161gpd82ls43347txgkl9osbsbpiqub00x5df10xfjd5maz36e91j9ywcuzyukjnoijky4a2f36mx8jdflsuqj9o4gd4lai4u6q1dfx2xsfmckytgf80bny7gg755gyw98lr6htpa8z33v0hllsgo6lbkiy1lh0s8wypdmn1rvjmsh5im0w == 
\l\s\s\j\f\t\e\5\w\q\y\z\9\o\k\z\m\e\7\d\g\r\b\6\k\z\d\f\o\r\s\c\r\n\m\e\9\d\8\d\x\f\0\3\8\c\4\5\6\s\7\d\l\0\e\9\6\h\b\s\6\w\a\7\s\v\p\x\2\e\u\2\r\t\1\k\0\a\f\i\x\t\i\4\7\n\w\z\m\h\2\u\1\2\w\e\t\i\x\8\a\j\8\y\e\s\4\l\e\w\n\v\c\f\m\g\w\m\y\j\9\i\w\v\h\k\u\l\9\t\8\m\e\x\m\n\w\7\z\c\x\v\s\z\u\l\r\d\5\a\u\q\f\9\1\q\k\e\z\6\b\0\i\r\b\y\d\6\1\i\v\h\j\0\c\g\0\x\v\3\y\4\4\7\n\0\e\n\h\w\i\v\c\d\k\s\i\1\r\c\a\p\1\m\o\m\0\l\x\n\u\1\i\u\3\f\h\0\i\1\g\h\f\6\w\1\z\n\y\b\4\0\9\k\h\b\2\m\2\v\v\b\f\y\u\0\o\o\7\h\f\h\i\0\a\p\r\c\h\s\d\f\9\9\h\w\m\l\4\3\e\g\i\r\z\4\2\v\5\k\q\o\p\i\l\j\z\6\a\g\u\r\s\7\m\r\7\v\i\6\0\7\v\i\c\w\u\o\h\9\4\5\z\w\l\m\c\z\g\l\f\7\h\i\d\5\b\q\f\6\5\i\s\1\6\1\g\p\d\8\2\l\s\4\3\3\4\7\t\x\g\k\l\9\o\s\b\s\b\p\i\q\u\b\0\0\x\5\d\f\1\0\x\f\j\d\5\m\a\z\3\6\e\9\1\j\9\y\w\c\u\z\y\u\k\j\n\o\i\j\k\y\4\a\2\f\3\6\m\x\8\j\d\f\l\s\u\q\j\9\o\4\g\d\4\l\a\i\4\u\6\q\1\d\f\x\2\x\s\f\m\c\k\y\t\g\f\8\0\b\n\y\7\g\g\7\5\5\g\y\w\9\8\l\r\6\h\t\p\a\8\z\3\3\v\0\h\l\l\s\g\o\6\l\b\k\i\y\1\l\h\0\s\8\w\y\p\d\m\n\1\r\v\j\m\s\h\5\i\m\0\w ]] 00:10:12.211 04:59:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:12.211 04:59:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:10:12.211 [2024-07-24 04:59:26.647958] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:10:12.211 [2024-07-24 04:59:26.648120] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65559 ] 00:10:12.211 [2024-07-24 04:59:26.829649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.471 [2024-07-24 04:59:27.048966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.730 [2024-07-24 04:59:27.280933] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:14.368  Copying: 512/512 [B] (average 500 kBps) 00:10:14.368 00:10:14.368 04:59:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ lssjfte5wqyz9okzme7dgrb6kzdforscrnme9d8dxf038c456s7dl0e96hbs6wa7svpx2eu2rt1k0afixti47nwzmh2u12wetix8aj8yes4lewnvcfmgwmyj9iwvhkul9t8mexmnw7zcxvszulrd5auqf91qkez6b0irbyd61ivhj0cg0xv3y447n0enhwivcdksi1rcap1mom0lxnu1iu3fh0i1ghf6w1znyb409khb2m2vvbfyu0oo7hfhi0aprchsdf99hwml43egirz42v5kqopiljz6agurs7mr7vi607vicwuoh945zwlmczglf7hid5bqf65is161gpd82ls43347txgkl9osbsbpiqub00x5df10xfjd5maz36e91j9ywcuzyukjnoijky4a2f36mx8jdflsuqj9o4gd4lai4u6q1dfx2xsfmckytgf80bny7gg755gyw98lr6htpa8z33v0hllsgo6lbkiy1lh0s8wypdmn1rvjmsh5im0w == 
\l\s\s\j\f\t\e\5\w\q\y\z\9\o\k\z\m\e\7\d\g\r\b\6\k\z\d\f\o\r\s\c\r\n\m\e\9\d\8\d\x\f\0\3\8\c\4\5\6\s\7\d\l\0\e\9\6\h\b\s\6\w\a\7\s\v\p\x\2\e\u\2\r\t\1\k\0\a\f\i\x\t\i\4\7\n\w\z\m\h\2\u\1\2\w\e\t\i\x\8\a\j\8\y\e\s\4\l\e\w\n\v\c\f\m\g\w\m\y\j\9\i\w\v\h\k\u\l\9\t\8\m\e\x\m\n\w\7\z\c\x\v\s\z\u\l\r\d\5\a\u\q\f\9\1\q\k\e\z\6\b\0\i\r\b\y\d\6\1\i\v\h\j\0\c\g\0\x\v\3\y\4\4\7\n\0\e\n\h\w\i\v\c\d\k\s\i\1\r\c\a\p\1\m\o\m\0\l\x\n\u\1\i\u\3\f\h\0\i\1\g\h\f\6\w\1\z\n\y\b\4\0\9\k\h\b\2\m\2\v\v\b\f\y\u\0\o\o\7\h\f\h\i\0\a\p\r\c\h\s\d\f\9\9\h\w\m\l\4\3\e\g\i\r\z\4\2\v\5\k\q\o\p\i\l\j\z\6\a\g\u\r\s\7\m\r\7\v\i\6\0\7\v\i\c\w\u\o\h\9\4\5\z\w\l\m\c\z\g\l\f\7\h\i\d\5\b\q\f\6\5\i\s\1\6\1\g\p\d\8\2\l\s\4\3\3\4\7\t\x\g\k\l\9\o\s\b\s\b\p\i\q\u\b\0\0\x\5\d\f\1\0\x\f\j\d\5\m\a\z\3\6\e\9\1\j\9\y\w\c\u\z\y\u\k\j\n\o\i\j\k\y\4\a\2\f\3\6\m\x\8\j\d\f\l\s\u\q\j\9\o\4\g\d\4\l\a\i\4\u\6\q\1\d\f\x\2\x\s\f\m\c\k\y\t\g\f\8\0\b\n\y\7\g\g\7\5\5\g\y\w\9\8\l\r\6\h\t\p\a\8\z\3\3\v\0\h\l\l\s\g\o\6\l\b\k\i\y\1\l\h\0\s\8\w\y\p\d\m\n\1\r\v\j\m\s\h\5\i\m\0\w ]] 00:10:14.368 04:59:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:14.368 04:59:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:10:14.368 [2024-07-24 04:59:28.782641] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:10:14.368 [2024-07-24 04:59:28.782807] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65584 ] 00:10:14.368 [2024-07-24 04:59:28.963805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.627 [2024-07-24 04:59:29.185343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.886 [2024-07-24 04:59:29.418127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:16.570  Copying: 512/512 [B] (average 166 kBps) 00:10:16.570 00:10:16.570 04:59:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ lssjfte5wqyz9okzme7dgrb6kzdforscrnme9d8dxf038c456s7dl0e96hbs6wa7svpx2eu2rt1k0afixti47nwzmh2u12wetix8aj8yes4lewnvcfmgwmyj9iwvhkul9t8mexmnw7zcxvszulrd5auqf91qkez6b0irbyd61ivhj0cg0xv3y447n0enhwivcdksi1rcap1mom0lxnu1iu3fh0i1ghf6w1znyb409khb2m2vvbfyu0oo7hfhi0aprchsdf99hwml43egirz42v5kqopiljz6agurs7mr7vi607vicwuoh945zwlmczglf7hid5bqf65is161gpd82ls43347txgkl9osbsbpiqub00x5df10xfjd5maz36e91j9ywcuzyukjnoijky4a2f36mx8jdflsuqj9o4gd4lai4u6q1dfx2xsfmckytgf80bny7gg755gyw98lr6htpa8z33v0hllsgo6lbkiy1lh0s8wypdmn1rvjmsh5im0w == 
\l\s\s\j\f\t\e\5\w\q\y\z\9\o\k\z\m\e\7\d\g\r\b\6\k\z\d\f\o\r\s\c\r\n\m\e\9\d\8\d\x\f\0\3\8\c\4\5\6\s\7\d\l\0\e\9\6\h\b\s\6\w\a\7\s\v\p\x\2\e\u\2\r\t\1\k\0\a\f\i\x\t\i\4\7\n\w\z\m\h\2\u\1\2\w\e\t\i\x\8\a\j\8\y\e\s\4\l\e\w\n\v\c\f\m\g\w\m\y\j\9\i\w\v\h\k\u\l\9\t\8\m\e\x\m\n\w\7\z\c\x\v\s\z\u\l\r\d\5\a\u\q\f\9\1\q\k\e\z\6\b\0\i\r\b\y\d\6\1\i\v\h\j\0\c\g\0\x\v\3\y\4\4\7\n\0\e\n\h\w\i\v\c\d\k\s\i\1\r\c\a\p\1\m\o\m\0\l\x\n\u\1\i\u\3\f\h\0\i\1\g\h\f\6\w\1\z\n\y\b\4\0\9\k\h\b\2\m\2\v\v\b\f\y\u\0\o\o\7\h\f\h\i\0\a\p\r\c\h\s\d\f\9\9\h\w\m\l\4\3\e\g\i\r\z\4\2\v\5\k\q\o\p\i\l\j\z\6\a\g\u\r\s\7\m\r\7\v\i\6\0\7\v\i\c\w\u\o\h\9\4\5\z\w\l\m\c\z\g\l\f\7\h\i\d\5\b\q\f\6\5\i\s\1\6\1\g\p\d\8\2\l\s\4\3\3\4\7\t\x\g\k\l\9\o\s\b\s\b\p\i\q\u\b\0\0\x\5\d\f\1\0\x\f\j\d\5\m\a\z\3\6\e\9\1\j\9\y\w\c\u\z\y\u\k\j\n\o\i\j\k\y\4\a\2\f\3\6\m\x\8\j\d\f\l\s\u\q\j\9\o\4\g\d\4\l\a\i\4\u\6\q\1\d\f\x\2\x\s\f\m\c\k\y\t\g\f\8\0\b\n\y\7\g\g\7\5\5\g\y\w\9\8\l\r\6\h\t\p\a\8\z\3\3\v\0\h\l\l\s\g\o\6\l\b\k\i\y\1\l\h\0\s\8\w\y\p\d\m\n\1\r\v\j\m\s\h\5\i\m\0\w ]] 00:10:16.570 04:59:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:16.570 04:59:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:10:16.570 [2024-07-24 04:59:30.928998] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:10:16.570 [2024-07-24 04:59:30.929164] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65609 ] 00:10:16.570 [2024-07-24 04:59:31.110761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.829 [2024-07-24 04:59:31.333291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.089 [2024-07-24 04:59:31.565720] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:18.467  Copying: 512/512 [B] (average 500 kBps) 00:10:18.467 00:10:18.467 04:59:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ lssjfte5wqyz9okzme7dgrb6kzdforscrnme9d8dxf038c456s7dl0e96hbs6wa7svpx2eu2rt1k0afixti47nwzmh2u12wetix8aj8yes4lewnvcfmgwmyj9iwvhkul9t8mexmnw7zcxvszulrd5auqf91qkez6b0irbyd61ivhj0cg0xv3y447n0enhwivcdksi1rcap1mom0lxnu1iu3fh0i1ghf6w1znyb409khb2m2vvbfyu0oo7hfhi0aprchsdf99hwml43egirz42v5kqopiljz6agurs7mr7vi607vicwuoh945zwlmczglf7hid5bqf65is161gpd82ls43347txgkl9osbsbpiqub00x5df10xfjd5maz36e91j9ywcuzyukjnoijky4a2f36mx8jdflsuqj9o4gd4lai4u6q1dfx2xsfmckytgf80bny7gg755gyw98lr6htpa8z33v0hllsgo6lbkiy1lh0s8wypdmn1rvjmsh5im0w == 
\l\s\s\j\f\t\e\5\w\q\y\z\9\o\k\z\m\e\7\d\g\r\b\6\k\z\d\f\o\r\s\c\r\n\m\e\9\d\8\d\x\f\0\3\8\c\4\5\6\s\7\d\l\0\e\9\6\h\b\s\6\w\a\7\s\v\p\x\2\e\u\2\r\t\1\k\0\a\f\i\x\t\i\4\7\n\w\z\m\h\2\u\1\2\w\e\t\i\x\8\a\j\8\y\e\s\4\l\e\w\n\v\c\f\m\g\w\m\y\j\9\i\w\v\h\k\u\l\9\t\8\m\e\x\m\n\w\7\z\c\x\v\s\z\u\l\r\d\5\a\u\q\f\9\1\q\k\e\z\6\b\0\i\r\b\y\d\6\1\i\v\h\j\0\c\g\0\x\v\3\y\4\4\7\n\0\e\n\h\w\i\v\c\d\k\s\i\1\r\c\a\p\1\m\o\m\0\l\x\n\u\1\i\u\3\f\h\0\i\1\g\h\f\6\w\1\z\n\y\b\4\0\9\k\h\b\2\m\2\v\v\b\f\y\u\0\o\o\7\h\f\h\i\0\a\p\r\c\h\s\d\f\9\9\h\w\m\l\4\3\e\g\i\r\z\4\2\v\5\k\q\o\p\i\l\j\z\6\a\g\u\r\s\7\m\r\7\v\i\6\0\7\v\i\c\w\u\o\h\9\4\5\z\w\l\m\c\z\g\l\f\7\h\i\d\5\b\q\f\6\5\i\s\1\6\1\g\p\d\8\2\l\s\4\3\3\4\7\t\x\g\k\l\9\o\s\b\s\b\p\i\q\u\b\0\0\x\5\d\f\1\0\x\f\j\d\5\m\a\z\3\6\e\9\1\j\9\y\w\c\u\z\y\u\k\j\n\o\i\j\k\y\4\a\2\f\3\6\m\x\8\j\d\f\l\s\u\q\j\9\o\4\g\d\4\l\a\i\4\u\6\q\1\d\f\x\2\x\s\f\m\c\k\y\t\g\f\8\0\b\n\y\7\g\g\7\5\5\g\y\w\9\8\l\r\6\h\t\p\a\8\z\3\3\v\0\h\l\l\s\g\o\6\l\b\k\i\y\1\l\h\0\s\8\w\y\p\d\m\n\1\r\v\j\m\s\h\5\i\m\0\w ]] 00:10:18.467 04:59:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:10:18.467 04:59:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:10:18.467 04:59:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:18.467 04:59:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:18.468 04:59:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:18.468 04:59:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:10:18.468 [2024-07-24 04:59:33.094234] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 
initialization... 00:10:18.468 [2024-07-24 04:59:33.094405] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65634 ] 00:10:18.727 [2024-07-24 04:59:33.276160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.985 [2024-07-24 04:59:33.494591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.243 [2024-07-24 04:59:33.723677] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:20.620  Copying: 512/512 [B] (average 500 kBps) 00:10:20.620 00:10:20.621 04:59:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ padczkcp0dgb7uterr8woufqm2o854mjges3yynysyst8uhwr5qzm3s2jjm56axrzshesgwj5inu8lzmj939t91t7ry0tey91jh5etj7wn27zxmec97lw7oxv9p11bzsbn6tazgkg1ixltho8hdf6ek0um5njzyquk10sjg5wnbbbdabrkmxa4mu24g3i0rm5uffsvi8nu9gymq0trxtcdgw2mvjm842lj1yg0for0dimwhqt506z90koh0nb6mhjsulun2aquud6a3ssjgqtksp0xc77yeb4p6wr3zn234z6zy6yy4wjpayn3z5txlpaugqy2di6iw74bi65ntkjeo7hrfvueqk962jflpcnw63lxa6munbn3p09uwnbvn33k8mbhf0t94ks5vjso44zcz3s169g6ausp4njnhott7cb20pxl463ei3ss4m1qxeiloomafy3njj43r7bopqf4kn5hso37lw4ckvcozbgrohoxu248a8gbdacotaxq3d == 
\p\a\d\c\z\k\c\p\0\d\g\b\7\u\t\e\r\r\8\w\o\u\f\q\m\2\o\8\5\4\m\j\g\e\s\3\y\y\n\y\s\y\s\t\8\u\h\w\r\5\q\z\m\3\s\2\j\j\m\5\6\a\x\r\z\s\h\e\s\g\w\j\5\i\n\u\8\l\z\m\j\9\3\9\t\9\1\t\7\r\y\0\t\e\y\9\1\j\h\5\e\t\j\7\w\n\2\7\z\x\m\e\c\9\7\l\w\7\o\x\v\9\p\1\1\b\z\s\b\n\6\t\a\z\g\k\g\1\i\x\l\t\h\o\8\h\d\f\6\e\k\0\u\m\5\n\j\z\y\q\u\k\1\0\s\j\g\5\w\n\b\b\b\d\a\b\r\k\m\x\a\4\m\u\2\4\g\3\i\0\r\m\5\u\f\f\s\v\i\8\n\u\9\g\y\m\q\0\t\r\x\t\c\d\g\w\2\m\v\j\m\8\4\2\l\j\1\y\g\0\f\o\r\0\d\i\m\w\h\q\t\5\0\6\z\9\0\k\o\h\0\n\b\6\m\h\j\s\u\l\u\n\2\a\q\u\u\d\6\a\3\s\s\j\g\q\t\k\s\p\0\x\c\7\7\y\e\b\4\p\6\w\r\3\z\n\2\3\4\z\6\z\y\6\y\y\4\w\j\p\a\y\n\3\z\5\t\x\l\p\a\u\g\q\y\2\d\i\6\i\w\7\4\b\i\6\5\n\t\k\j\e\o\7\h\r\f\v\u\e\q\k\9\6\2\j\f\l\p\c\n\w\6\3\l\x\a\6\m\u\n\b\n\3\p\0\9\u\w\n\b\v\n\3\3\k\8\m\b\h\f\0\t\9\4\k\s\5\v\j\s\o\4\4\z\c\z\3\s\1\6\9\g\6\a\u\s\p\4\n\j\n\h\o\t\t\7\c\b\2\0\p\x\l\4\6\3\e\i\3\s\s\4\m\1\q\x\e\i\l\o\o\m\a\f\y\3\n\j\j\4\3\r\7\b\o\p\q\f\4\k\n\5\h\s\o\3\7\l\w\4\c\k\v\c\o\z\b\g\r\o\h\o\x\u\2\4\8\a\8\g\b\d\a\c\o\t\a\x\q\3\d ]] 00:10:20.621 04:59:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:20.621 04:59:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:10:20.621 [2024-07-24 04:59:35.249957] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:10:20.621 [2024-07-24 04:59:35.250124] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65659 ] 00:10:20.880 [2024-07-24 04:59:35.427870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.139 [2024-07-24 04:59:35.642303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.398 [2024-07-24 04:59:35.876613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:22.778  Copying: 512/512 [B] (average 500 kBps) 00:10:22.778 00:10:22.778 04:59:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ padczkcp0dgb7uterr8woufqm2o854mjges3yynysyst8uhwr5qzm3s2jjm56axrzshesgwj5inu8lzmj939t91t7ry0tey91jh5etj7wn27zxmec97lw7oxv9p11bzsbn6tazgkg1ixltho8hdf6ek0um5njzyquk10sjg5wnbbbdabrkmxa4mu24g3i0rm5uffsvi8nu9gymq0trxtcdgw2mvjm842lj1yg0for0dimwhqt506z90koh0nb6mhjsulun2aquud6a3ssjgqtksp0xc77yeb4p6wr3zn234z6zy6yy4wjpayn3z5txlpaugqy2di6iw74bi65ntkjeo7hrfvueqk962jflpcnw63lxa6munbn3p09uwnbvn33k8mbhf0t94ks5vjso44zcz3s169g6ausp4njnhott7cb20pxl463ei3ss4m1qxeiloomafy3njj43r7bopqf4kn5hso37lw4ckvcozbgrohoxu248a8gbdacotaxq3d == 
\p\a\d\c\z\k\c\p\0\d\g\b\7\u\t\e\r\r\8\w\o\u\f\q\m\2\o\8\5\4\m\j\g\e\s\3\y\y\n\y\s\y\s\t\8\u\h\w\r\5\q\z\m\3\s\2\j\j\m\5\6\a\x\r\z\s\h\e\s\g\w\j\5\i\n\u\8\l\z\m\j\9\3\9\t\9\1\t\7\r\y\0\t\e\y\9\1\j\h\5\e\t\j\7\w\n\2\7\z\x\m\e\c\9\7\l\w\7\o\x\v\9\p\1\1\b\z\s\b\n\6\t\a\z\g\k\g\1\i\x\l\t\h\o\8\h\d\f\6\e\k\0\u\m\5\n\j\z\y\q\u\k\1\0\s\j\g\5\w\n\b\b\b\d\a\b\r\k\m\x\a\4\m\u\2\4\g\3\i\0\r\m\5\u\f\f\s\v\i\8\n\u\9\g\y\m\q\0\t\r\x\t\c\d\g\w\2\m\v\j\m\8\4\2\l\j\1\y\g\0\f\o\r\0\d\i\m\w\h\q\t\5\0\6\z\9\0\k\o\h\0\n\b\6\m\h\j\s\u\l\u\n\2\a\q\u\u\d\6\a\3\s\s\j\g\q\t\k\s\p\0\x\c\7\7\y\e\b\4\p\6\w\r\3\z\n\2\3\4\z\6\z\y\6\y\y\4\w\j\p\a\y\n\3\z\5\t\x\l\p\a\u\g\q\y\2\d\i\6\i\w\7\4\b\i\6\5\n\t\k\j\e\o\7\h\r\f\v\u\e\q\k\9\6\2\j\f\l\p\c\n\w\6\3\l\x\a\6\m\u\n\b\n\3\p\0\9\u\w\n\b\v\n\3\3\k\8\m\b\h\f\0\t\9\4\k\s\5\v\j\s\o\4\4\z\c\z\3\s\1\6\9\g\6\a\u\s\p\4\n\j\n\h\o\t\t\7\c\b\2\0\p\x\l\4\6\3\e\i\3\s\s\4\m\1\q\x\e\i\l\o\o\m\a\f\y\3\n\j\j\4\3\r\7\b\o\p\q\f\4\k\n\5\h\s\o\3\7\l\w\4\c\k\v\c\o\z\b\g\r\o\h\o\x\u\2\4\8\a\8\g\b\d\a\c\o\t\a\x\q\3\d ]] 00:10:22.778 04:59:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:22.778 04:59:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:10:22.778 [2024-07-24 04:59:37.386915] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:10:22.778 [2024-07-24 04:59:37.387082] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65684 ] 00:10:23.037 [2024-07-24 04:59:37.567116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.296 [2024-07-24 04:59:37.785526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.555 [2024-07-24 04:59:38.020822] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:24.933  Copying: 512/512 [B] (average 250 kBps) 00:10:24.933 00:10:24.933 04:59:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ padczkcp0dgb7uterr8woufqm2o854mjges3yynysyst8uhwr5qzm3s2jjm56axrzshesgwj5inu8lzmj939t91t7ry0tey91jh5etj7wn27zxmec97lw7oxv9p11bzsbn6tazgkg1ixltho8hdf6ek0um5njzyquk10sjg5wnbbbdabrkmxa4mu24g3i0rm5uffsvi8nu9gymq0trxtcdgw2mvjm842lj1yg0for0dimwhqt506z90koh0nb6mhjsulun2aquud6a3ssjgqtksp0xc77yeb4p6wr3zn234z6zy6yy4wjpayn3z5txlpaugqy2di6iw74bi65ntkjeo7hrfvueqk962jflpcnw63lxa6munbn3p09uwnbvn33k8mbhf0t94ks5vjso44zcz3s169g6ausp4njnhott7cb20pxl463ei3ss4m1qxeiloomafy3njj43r7bopqf4kn5hso37lw4ckvcozbgrohoxu248a8gbdacotaxq3d == 
\p\a\d\c\z\k\c\p\0\d\g\b\7\u\t\e\r\r\8\w\o\u\f\q\m\2\o\8\5\4\m\j\g\e\s\3\y\y\n\y\s\y\s\t\8\u\h\w\r\5\q\z\m\3\s\2\j\j\m\5\6\a\x\r\z\s\h\e\s\g\w\j\5\i\n\u\8\l\z\m\j\9\3\9\t\9\1\t\7\r\y\0\t\e\y\9\1\j\h\5\e\t\j\7\w\n\2\7\z\x\m\e\c\9\7\l\w\7\o\x\v\9\p\1\1\b\z\s\b\n\6\t\a\z\g\k\g\1\i\x\l\t\h\o\8\h\d\f\6\e\k\0\u\m\5\n\j\z\y\q\u\k\1\0\s\j\g\5\w\n\b\b\b\d\a\b\r\k\m\x\a\4\m\u\2\4\g\3\i\0\r\m\5\u\f\f\s\v\i\8\n\u\9\g\y\m\q\0\t\r\x\t\c\d\g\w\2\m\v\j\m\8\4\2\l\j\1\y\g\0\f\o\r\0\d\i\m\w\h\q\t\5\0\6\z\9\0\k\o\h\0\n\b\6\m\h\j\s\u\l\u\n\2\a\q\u\u\d\6\a\3\s\s\j\g\q\t\k\s\p\0\x\c\7\7\y\e\b\4\p\6\w\r\3\z\n\2\3\4\z\6\z\y\6\y\y\4\w\j\p\a\y\n\3\z\5\t\x\l\p\a\u\g\q\y\2\d\i\6\i\w\7\4\b\i\6\5\n\t\k\j\e\o\7\h\r\f\v\u\e\q\k\9\6\2\j\f\l\p\c\n\w\6\3\l\x\a\6\m\u\n\b\n\3\p\0\9\u\w\n\b\v\n\3\3\k\8\m\b\h\f\0\t\9\4\k\s\5\v\j\s\o\4\4\z\c\z\3\s\1\6\9\g\6\a\u\s\p\4\n\j\n\h\o\t\t\7\c\b\2\0\p\x\l\4\6\3\e\i\3\s\s\4\m\1\q\x\e\i\l\o\o\m\a\f\y\3\n\j\j\4\3\r\7\b\o\p\q\f\4\k\n\5\h\s\o\3\7\l\w\4\c\k\v\c\o\z\b\g\r\o\h\o\x\u\2\4\8\a\8\g\b\d\a\c\o\t\a\x\q\3\d ]] 00:10:24.933 04:59:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:24.933 04:59:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:10:24.933 [2024-07-24 04:59:39.526859] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:10:24.933 [2024-07-24 04:59:39.527024] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65709 ] 00:10:25.192 [2024-07-24 04:59:39.699058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.451 [2024-07-24 04:59:39.914696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.710 [2024-07-24 04:59:40.153346] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:27.088  Copying: 512/512 [B] (average 250 kBps) 00:10:27.088 00:10:27.088 04:59:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ padczkcp0dgb7uterr8woufqm2o854mjges3yynysyst8uhwr5qzm3s2jjm56axrzshesgwj5inu8lzmj939t91t7ry0tey91jh5etj7wn27zxmec97lw7oxv9p11bzsbn6tazgkg1ixltho8hdf6ek0um5njzyquk10sjg5wnbbbdabrkmxa4mu24g3i0rm5uffsvi8nu9gymq0trxtcdgw2mvjm842lj1yg0for0dimwhqt506z90koh0nb6mhjsulun2aquud6a3ssjgqtksp0xc77yeb4p6wr3zn234z6zy6yy4wjpayn3z5txlpaugqy2di6iw74bi65ntkjeo7hrfvueqk962jflpcnw63lxa6munbn3p09uwnbvn33k8mbhf0t94ks5vjso44zcz3s169g6ausp4njnhott7cb20pxl463ei3ss4m1qxeiloomafy3njj43r7bopqf4kn5hso37lw4ckvcozbgrohoxu248a8gbdacotaxq3d == 
\p\a\d\c\z\k\c\p\0\d\g\b\7\u\t\e\r\r\8\w\o\u\f\q\m\2\o\8\5\4\m\j\g\e\s\3\y\y\n\y\s\y\s\t\8\u\h\w\r\5\q\z\m\3\s\2\j\j\m\5\6\a\x\r\z\s\h\e\s\g\w\j\5\i\n\u\8\l\z\m\j\9\3\9\t\9\1\t\7\r\y\0\t\e\y\9\1\j\h\5\e\t\j\7\w\n\2\7\z\x\m\e\c\9\7\l\w\7\o\x\v\9\p\1\1\b\z\s\b\n\6\t\a\z\g\k\g\1\i\x\l\t\h\o\8\h\d\f\6\e\k\0\u\m\5\n\j\z\y\q\u\k\1\0\s\j\g\5\w\n\b\b\b\d\a\b\r\k\m\x\a\4\m\u\2\4\g\3\i\0\r\m\5\u\f\f\s\v\i\8\n\u\9\g\y\m\q\0\t\r\x\t\c\d\g\w\2\m\v\j\m\8\4\2\l\j\1\y\g\0\f\o\r\0\d\i\m\w\h\q\t\5\0\6\z\9\0\k\o\h\0\n\b\6\m\h\j\s\u\l\u\n\2\a\q\u\u\d\6\a\3\s\s\j\g\q\t\k\s\p\0\x\c\7\7\y\e\b\4\p\6\w\r\3\z\n\2\3\4\z\6\z\y\6\y\y\4\w\j\p\a\y\n\3\z\5\t\x\l\p\a\u\g\q\y\2\d\i\6\i\w\7\4\b\i\6\5\n\t\k\j\e\o\7\h\r\f\v\u\e\q\k\9\6\2\j\f\l\p\c\n\w\6\3\l\x\a\6\m\u\n\b\n\3\p\0\9\u\w\n\b\v\n\3\3\k\8\m\b\h\f\0\t\9\4\k\s\5\v\j\s\o\4\4\z\c\z\3\s\1\6\9\g\6\a\u\s\p\4\n\j\n\h\o\t\t\7\c\b\2\0\p\x\l\4\6\3\e\i\3\s\s\4\m\1\q\x\e\i\l\o\o\m\a\f\y\3\n\j\j\4\3\r\7\b\o\p\q\f\4\k\n\5\h\s\o\3\7\l\w\4\c\k\v\c\o\z\b\g\r\o\h\o\x\u\2\4\8\a\8\g\b\d\a\c\o\t\a\x\q\3\d ]] 00:10:27.088 00:10:27.088 real 0m17.103s 00:10:27.088 user 0m14.160s 00:10:27.088 sys 0m1.938s 00:10:27.088 04:59:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:27.088 04:59:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:27.088 ************************************ 00:10:27.088 END TEST dd_flags_misc_forced_aio 00:10:27.088 ************************************ 00:10:27.088 04:59:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:10:27.088 04:59:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:10:27.089 04:59:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:10:27.089 00:10:27.089 real 1m10.895s 00:10:27.089 user 0m56.880s 
00:10:27.089 sys 0m18.260s 00:10:27.089 04:59:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:27.089 04:59:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:27.089 ************************************ 00:10:27.089 END TEST spdk_dd_posix 00:10:27.089 ************************************ 00:10:27.089 04:59:41 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:10:27.089 04:59:41 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:27.089 04:59:41 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:27.089 04:59:41 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:27.089 ************************************ 00:10:27.089 START TEST spdk_dd_malloc 00:10:27.089 ************************************ 00:10:27.089 04:59:41 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:10:27.348 * Looking for test storage... 
00:10:27.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:27.348 04:59:41 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:27.348 04:59:41 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.348 04:59:41 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.348 04:59:41 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.348 04:59:41 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.348 04:59:41 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.348 04:59:41 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.348 04:59:41 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:10:27.348 04:59:41 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.348 04:59:41 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:10:27.348 04:59:41 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:27.348 04:59:41 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:27.348 04:59:41 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:10:27.348 ************************************ 00:10:27.348 START TEST dd_malloc_copy 00:10:27.348 ************************************ 00:10:27.348 04:59:41 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:10:27.348 04:59:41 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 
mbdev0_b=1048576 mbdev0_bs=512 00:10:27.348 04:59:41 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:10:27.348 04:59:41 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:10:27.348 04:59:41 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:10:27.348 04:59:41 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:10:27.348 04:59:41 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:10:27.348 04:59:41 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:10:27.348 04:59:41 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:10:27.348 04:59:41 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:27.348 04:59:41 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:10:27.348 { 00:10:27.348 "subsystems": [ 00:10:27.348 { 00:10:27.348 "subsystem": "bdev", 00:10:27.348 "config": [ 00:10:27.348 { 00:10:27.348 "params": { 00:10:27.348 "block_size": 512, 00:10:27.348 "num_blocks": 1048576, 00:10:27.348 "name": "malloc0" 00:10:27.348 }, 00:10:27.348 "method": "bdev_malloc_create" 00:10:27.348 }, 00:10:27.348 { 00:10:27.348 "params": { 00:10:27.348 "block_size": 512, 00:10:27.348 "num_blocks": 1048576, 00:10:27.348 "name": "malloc1" 00:10:27.348 }, 00:10:27.348 "method": "bdev_malloc_create" 00:10:27.348 }, 00:10:27.348 { 00:10:27.348 "method": "bdev_wait_for_examine" 00:10:27.348 } 00:10:27.348 ] 00:10:27.348 } 00:10:27.348 ] 00:10:27.348 } 00:10:27.348 [2024-07-24 04:59:41.900904] Starting SPDK v24.09-pre git sha1 
78cbcfdde / DPDK 24.03.0 initialization... 00:10:27.348 [2024-07-24 04:59:41.901063] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65795 ] 00:10:27.607 [2024-07-24 04:59:42.082948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.865 [2024-07-24 04:59:42.301991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.124 [2024-07-24 04:59:42.534827] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:35.729  Copying: 231/512 [MB] (231 MBps) Copying: 465/512 [MB] (234 MBps) Copying: 512/512 [MB] (average 232 MBps) 00:10:35.729 00:10:35.729 04:59:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:10:35.729 04:59:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:10:35.729 04:59:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:35.729 04:59:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:10:35.729 { 00:10:35.729 "subsystems": [ 00:10:35.729 { 00:10:35.729 "subsystem": "bdev", 00:10:35.729 "config": [ 00:10:35.729 { 00:10:35.729 "params": { 00:10:35.729 "block_size": 512, 00:10:35.729 "num_blocks": 1048576, 00:10:35.729 "name": "malloc0" 00:10:35.729 }, 00:10:35.729 "method": "bdev_malloc_create" 00:10:35.729 }, 00:10:35.729 { 00:10:35.729 "params": { 00:10:35.729 "block_size": 512, 00:10:35.729 "num_blocks": 1048576, 00:10:35.729 "name": "malloc1" 00:10:35.729 }, 00:10:35.729 "method": "bdev_malloc_create" 00:10:35.729 }, 00:10:35.729 { 00:10:35.729 "method": "bdev_wait_for_examine" 00:10:35.729 } 00:10:35.729 ] 00:10:35.729 } 00:10:35.729 ] 00:10:35.729 } 00:10:35.729 
[2024-07-24 04:59:50.274890] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:10:35.729 [2024-07-24 04:59:50.275046] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65899 ] 00:10:35.988 [2024-07-24 04:59:50.456797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.247 [2024-07-24 04:59:50.667953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.507 [2024-07-24 04:59:50.898802] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:44.059  Copying: 235/512 [MB] (235 MBps) Copying: 470/512 [MB] (234 MBps) Copying: 512/512 [MB] (average 234 MBps) 00:10:44.059 00:10:44.059 00:10:44.059 real 0m16.702s 00:10:44.059 user 0m15.448s 00:10:44.059 sys 0m1.062s 00:10:44.059 04:59:58 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:44.059 04:59:58 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:10:44.059 ************************************ 00:10:44.060 END TEST dd_malloc_copy 00:10:44.060 ************************************ 00:10:44.060 00:10:44.060 real 0m16.871s 00:10:44.060 user 0m15.512s 00:10:44.060 sys 0m1.171s 00:10:44.060 04:59:58 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:44.060 04:59:58 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:10:44.060 ************************************ 00:10:44.060 END TEST spdk_dd_malloc 00:10:44.060 ************************************ 00:10:44.060 04:59:58 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:10:44.060 04:59:58 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 
']' 00:10:44.060 04:59:58 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.060 04:59:58 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:44.060 ************************************ 00:10:44.060 START TEST spdk_dd_bdev_to_bdev 00:10:44.060 ************************************ 00:10:44.060 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:10:44.060 * Looking for test storage... 00:10:44.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:44.319 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:44.319 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.319 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.319 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.319 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.320 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.320 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.320 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:10:44.320 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.320 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:10:44.320 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- 
dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:10:44.320 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:10:44.320 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:10:44.320 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:10:44.320 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:10:44.320 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:10:44.320 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:10:44.320 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:10:44.320 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:10:44.320 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:10:44.320 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:10:44.320 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:10:44.320 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:10:44.320 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:44.320 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:44.320 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:10:44.320 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:10:44.320 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- 
dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:10:44.320 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:44.320 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.320 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:44.320 ************************************ 00:10:44.320 START TEST dd_inflate_file 00:10:44.320 ************************************ 00:10:44.320 04:59:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:10:44.320 [2024-07-24 04:59:58.837814] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:10:44.320 [2024-07-24 04:59:58.837972] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66049 ] 00:10:44.579 [2024-07-24 04:59:59.019490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.838 [2024-07-24 04:59:59.230899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.838 [2024-07-24 04:59:59.463276] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:46.529  Copying: 64/64 [MB] (average 1560 MBps) 00:10:46.529 00:10:46.529 00:10:46.529 real 0m2.203s 00:10:46.529 user 0m1.822s 00:10:46.529 sys 0m1.175s 00:10:46.529 05:00:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:46.529 05:00:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:10:46.529 ************************************ 00:10:46.529 END TEST dd_inflate_file 00:10:46.529 ************************************ 00:10:46.529 05:00:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:10:46.529 05:00:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:10:46.529 05:00:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:10:46.529 05:00:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:10:46.529 05:00:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:46.529 05:00:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:10:46.529 05:00:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:10:46.529 05:00:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:46.529 05:00:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:46.529 ************************************ 00:10:46.529 START TEST dd_copy_to_out_bdev 00:10:46.529 ************************************ 00:10:46.529 05:00:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:10:46.529 { 00:10:46.529 "subsystems": [ 00:10:46.529 { 00:10:46.529 "subsystem": "bdev", 00:10:46.529 "config": [ 00:10:46.529 { 00:10:46.529 "params": { 00:10:46.529 "trtype": "pcie", 00:10:46.529 "traddr": "0000:00:10.0", 00:10:46.529 "name": "Nvme0" 00:10:46.529 }, 00:10:46.529 "method": "bdev_nvme_attach_controller" 00:10:46.529 }, 00:10:46.529 { 00:10:46.529 "params": { 00:10:46.529 "trtype": "pcie", 00:10:46.529 "traddr": "0000:00:11.0", 00:10:46.529 "name": "Nvme1" 00:10:46.529 }, 00:10:46.529 "method": "bdev_nvme_attach_controller" 00:10:46.529 }, 00:10:46.529 { 00:10:46.529 "method": "bdev_wait_for_examine" 00:10:46.529 } 00:10:46.529 ] 00:10:46.529 } 00:10:46.529 ] 00:10:46.529 } 00:10:46.529 [2024-07-24 05:00:01.095172] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:10:46.529 [2024-07-24 05:00:01.095333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66104 ] 00:10:46.788 [2024-07-24 05:00:01.272513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.047 [2024-07-24 05:00:01.503863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.304 [2024-07-24 05:00:01.743725] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:50.071  Copying: 58/64 [MB] (58 MBps) Copying: 64/64 [MB] (average 58 MBps) 00:10:50.071 00:10:50.071 00:10:50.071 real 0m3.455s 00:10:50.071 user 0m3.107s 00:10:50.071 sys 0m2.258s 00:10:50.071 05:00:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:50.071 05:00:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:50.071 ************************************ 00:10:50.071 END TEST dd_copy_to_out_bdev 00:10:50.071 ************************************ 00:10:50.071 05:00:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:10:50.071 05:00:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:10:50.071 05:00:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:50.071 05:00:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:50.071 05:00:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:50.071 ************************************ 00:10:50.071 START TEST dd_offset_magic 00:10:50.071 ************************************ 00:10:50.071 05:00:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 
00:10:50.071 05:00:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:10:50.071 05:00:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:10:50.071 05:00:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:10:50.071 05:00:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:10:50.071 05:00:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:10:50.071 05:00:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:10:50.071 05:00:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:10:50.071 05:00:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:10:50.071 { 00:10:50.071 "subsystems": [ 00:10:50.071 { 00:10:50.071 "subsystem": "bdev", 00:10:50.071 "config": [ 00:10:50.071 { 00:10:50.071 "params": { 00:10:50.071 "trtype": "pcie", 00:10:50.071 "traddr": "0000:00:10.0", 00:10:50.071 "name": "Nvme0" 00:10:50.071 }, 00:10:50.071 "method": "bdev_nvme_attach_controller" 00:10:50.071 }, 00:10:50.071 { 00:10:50.071 "params": { 00:10:50.071 "trtype": "pcie", 00:10:50.071 "traddr": "0000:00:11.0", 00:10:50.071 "name": "Nvme1" 00:10:50.071 }, 00:10:50.071 "method": "bdev_nvme_attach_controller" 00:10:50.071 }, 00:10:50.071 { 00:10:50.071 "method": "bdev_wait_for_examine" 00:10:50.071 } 00:10:50.071 ] 00:10:50.071 } 00:10:50.071 ] 00:10:50.071 } 00:10:50.072 [2024-07-24 05:00:04.609143] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:10:50.072 [2024-07-24 05:00:04.609302] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66168 ] 00:10:50.331 [2024-07-24 05:00:04.791116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.589 [2024-07-24 05:00:05.003680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.846 [2024-07-24 05:00:05.226547] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:52.039  Copying: 65/65 [MB] (average 747 MBps) 00:10:52.039 00:10:52.039 05:00:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:10:52.039 05:00:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:10:52.039 05:00:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:10:52.039 05:00:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:10:52.298 { 00:10:52.298 "subsystems": [ 00:10:52.298 { 00:10:52.298 "subsystem": "bdev", 00:10:52.298 "config": [ 00:10:52.298 { 00:10:52.298 "params": { 00:10:52.298 "trtype": "pcie", 00:10:52.298 "traddr": "0000:00:10.0", 00:10:52.298 "name": "Nvme0" 00:10:52.298 }, 00:10:52.298 "method": "bdev_nvme_attach_controller" 00:10:52.298 }, 00:10:52.298 { 00:10:52.298 "params": { 00:10:52.298 "trtype": "pcie", 00:10:52.298 "traddr": "0000:00:11.0", 00:10:52.298 "name": "Nvme1" 00:10:52.298 }, 00:10:52.298 "method": "bdev_nvme_attach_controller" 00:10:52.298 }, 00:10:52.298 { 00:10:52.298 "method": "bdev_wait_for_examine" 00:10:52.298 } 00:10:52.298 ] 00:10:52.298 } 00:10:52.298 ] 00:10:52.298 
} 00:10:52.298 [2024-07-24 05:00:06.775193] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:10:52.298 [2024-07-24 05:00:06.775357] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66200 ] 00:10:52.557 [2024-07-24 05:00:06.955776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.557 [2024-07-24 05:00:07.166151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.815 [2024-07-24 05:00:07.390555] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:54.452  Copying: 1024/1024 [kB] (average 1000 MBps) 00:10:54.452 00:10:54.452 05:00:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:10:54.452 05:00:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:10:54.452 05:00:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:10:54.452 05:00:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:10:54.452 05:00:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:10:54.452 05:00:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:10:54.452 05:00:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:10:54.452 { 00:10:54.452 "subsystems": [ 00:10:54.452 { 00:10:54.452 "subsystem": "bdev", 00:10:54.452 "config": [ 00:10:54.452 { 00:10:54.452 "params": { 00:10:54.452 
"trtype": "pcie", 00:10:54.452 "traddr": "0000:00:10.0", 00:10:54.452 "name": "Nvme0" 00:10:54.452 }, 00:10:54.452 "method": "bdev_nvme_attach_controller" 00:10:54.452 }, 00:10:54.452 { 00:10:54.452 "params": { 00:10:54.452 "trtype": "pcie", 00:10:54.452 "traddr": "0000:00:11.0", 00:10:54.452 "name": "Nvme1" 00:10:54.452 }, 00:10:54.452 "method": "bdev_nvme_attach_controller" 00:10:54.452 }, 00:10:54.452 { 00:10:54.452 "method": "bdev_wait_for_examine" 00:10:54.452 } 00:10:54.452 ] 00:10:54.452 } 00:10:54.452 ] 00:10:54.452 } 00:10:54.452 [2024-07-24 05:00:09.060650] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:10:54.452 [2024-07-24 05:00:09.060811] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66234 ] 00:10:54.717 [2024-07-24 05:00:09.243203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.977 [2024-07-24 05:00:09.455659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.236 [2024-07-24 05:00:09.675364] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:56.873  Copying: 65/65 [MB] (average 822 MBps) 00:10:56.873 00:10:56.873 05:00:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:10:56.873 05:00:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:10:56.873 05:00:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:10:56.873 05:00:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:10:56.873 { 00:10:56.873 "subsystems": [ 
00:10:56.873 { 00:10:56.873 "subsystem": "bdev", 00:10:56.873 "config": [ 00:10:56.873 { 00:10:56.873 "params": { 00:10:56.873 "trtype": "pcie", 00:10:56.873 "traddr": "0000:00:10.0", 00:10:56.873 "name": "Nvme0" 00:10:56.873 }, 00:10:56.873 "method": "bdev_nvme_attach_controller" 00:10:56.873 }, 00:10:56.873 { 00:10:56.873 "params": { 00:10:56.873 "trtype": "pcie", 00:10:56.873 "traddr": "0000:00:11.0", 00:10:56.873 "name": "Nvme1" 00:10:56.873 }, 00:10:56.873 "method": "bdev_nvme_attach_controller" 00:10:56.873 }, 00:10:56.873 { 00:10:56.873 "method": "bdev_wait_for_examine" 00:10:56.873 } 00:10:56.873 ] 00:10:56.873 } 00:10:56.873 ] 00:10:56.873 } 00:10:56.873 [2024-07-24 05:00:11.219852] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:10:56.873 [2024-07-24 05:00:11.220017] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66266 ] 00:10:56.873 [2024-07-24 05:00:11.392226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.141 [2024-07-24 05:00:11.602911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.413 [2024-07-24 05:00:11.838030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:59.047  Copying: 1024/1024 [kB] (average 500 MBps) 00:10:59.047 00:10:59.047 ************************************ 00:10:59.047 END TEST dd_offset_magic 00:10:59.047 ************************************ 00:10:59.047 05:00:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:10:59.047 05:00:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:10:59.047 00:10:59.047 real 0m8.896s 00:10:59.047 user 
0m7.554s 00:10:59.047 sys 0m2.829s 00:10:59.047 05:00:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:59.047 05:00:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:10:59.047 05:00:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:10:59.047 05:00:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:10:59.047 05:00:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:59.047 05:00:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:10:59.047 05:00:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:10:59.047 05:00:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:10:59.047 05:00:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:10:59.047 05:00:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:10:59.047 05:00:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:10:59.047 05:00:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:10:59.047 05:00:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:59.047 { 00:10:59.047 "subsystems": [ 00:10:59.047 { 00:10:59.047 "subsystem": "bdev", 00:10:59.047 "config": [ 00:10:59.047 { 00:10:59.047 "params": { 00:10:59.047 "trtype": "pcie", 00:10:59.047 "traddr": "0000:00:10.0", 00:10:59.047 "name": "Nvme0" 00:10:59.047 }, 00:10:59.047 "method": "bdev_nvme_attach_controller" 00:10:59.047 }, 00:10:59.047 { 00:10:59.047 "params": { 00:10:59.047 "trtype": "pcie", 00:10:59.047 "traddr": "0000:00:11.0", 00:10:59.047 "name": "Nvme1" 00:10:59.047 }, 00:10:59.047 "method": "bdev_nvme_attach_controller" 00:10:59.047 }, 00:10:59.047 { 00:10:59.047 "method": 
"bdev_wait_for_examine" 00:10:59.048 } 00:10:59.048 ] 00:10:59.048 } 00:10:59.048 ] 00:10:59.048 } 00:10:59.048 [2024-07-24 05:00:13.560523] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:10:59.048 [2024-07-24 05:00:13.560701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66316 ] 00:10:59.305 [2024-07-24 05:00:13.743848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.564 [2024-07-24 05:00:13.958002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.564 [2024-07-24 05:00:14.189065] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:01.066  Copying: 5120/5120 [kB] (average 1000 MBps) 00:11:01.066 00:11:01.066 05:00:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:11:01.066 05:00:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:11:01.066 05:00:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:11:01.066 05:00:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:11:01.066 05:00:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:11:01.066 05:00:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:11:01.066 05:00:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:11:01.066 05:00:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:11:01.066 05:00:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:01.066 05:00:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:01.066 { 00:11:01.066 
"subsystems": [ 00:11:01.066 { 00:11:01.066 "subsystem": "bdev", 00:11:01.066 "config": [ 00:11:01.066 { 00:11:01.066 "params": { 00:11:01.066 "trtype": "pcie", 00:11:01.066 "traddr": "0000:00:10.0", 00:11:01.066 "name": "Nvme0" 00:11:01.066 }, 00:11:01.066 "method": "bdev_nvme_attach_controller" 00:11:01.066 }, 00:11:01.066 { 00:11:01.066 "params": { 00:11:01.066 "trtype": "pcie", 00:11:01.066 "traddr": "0000:00:11.0", 00:11:01.066 "name": "Nvme1" 00:11:01.066 }, 00:11:01.066 "method": "bdev_nvme_attach_controller" 00:11:01.066 }, 00:11:01.066 { 00:11:01.066 "method": "bdev_wait_for_examine" 00:11:01.066 } 00:11:01.066 ] 00:11:01.066 } 00:11:01.066 ] 00:11:01.066 } 00:11:01.066 [2024-07-24 05:00:15.626531] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:11:01.066 [2024-07-24 05:00:15.626714] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66349 ] 00:11:01.324 [2024-07-24 05:00:15.804242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.582 [2024-07-24 05:00:16.018904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.840 [2024-07-24 05:00:16.250397] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:03.480  Copying: 5120/5120 [kB] (average 714 MBps) 00:11:03.480 00:11:03.480 05:00:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:11:03.480 ************************************ 00:11:03.480 END TEST spdk_dd_bdev_to_bdev 00:11:03.480 ************************************ 00:11:03.480 00:11:03.480 real 0m19.245s 00:11:03.480 user 0m16.278s 00:11:03.480 sys 0m8.496s 00:11:03.480 05:00:17 spdk_dd.spdk_dd_bdev_to_bdev -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:11:03.480 05:00:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:03.480 05:00:17 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:11:03.480 05:00:17 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:11:03.480 05:00:17 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:03.480 05:00:17 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:03.480 05:00:17 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:03.480 ************************************ 00:11:03.480 START TEST spdk_dd_uring 00:11:03.480 ************************************ 00:11:03.481 05:00:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:11:03.481 * Looking for test storage... 00:11:03.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:03.481 05:00:17 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:11:03.481 ************************************ 00:11:03.481 START TEST dd_uring_copy 00:11:03.481 ************************************ 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:11:03.481 05:00:18 
spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:11:03.481 05:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:03.481 
05:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=vigrndonp1m556kvr1152sq61ujrrj2a03e3vx7lncnandnqlkhbhypwxs5tixu7jq07s8te9viud5z2z4h7utoflmiisu4ieqgw9aelt3samo4lv772h67cv7770gw1kqpffzebbmc7h4emopq7ei2apb3iv2eg7xgkonz554r4zblbqdyrjg3qqydoi6crsnk74w48hukdpilu6syyp3ljtv276n1r4rupzzbcdib7vuiibru8yzk5yywt1uzlclgughkbtqvn0hrph0dk4iwgu7nuxzpdnzdfeubwh9g1tylqi6j6lwhg5zt387kb07i6nccz3v8053c0bxg1lobefy4eu7bastl9v4ddetwdlqj5xgcwhxtgaxnt0jjpdus15m6w2lqktv0gnm9q6kmv9yqz60n1jsgyhtvd8chldsbz2tilyqbs6cl616v97qwvdbrlr1o6jo9hrq8i6x2btv1gm0ushynlx6vqmslann1zmeetwwtpntg8kkwqbw5x1rfuf8sxmsy5dig4b8qv6jpccfccvpindfsmppiaylmm3du48ix5kxq99q1sjb4bzl4zby1pf23mmyuvfta7g1tij31rhx1igtizdd5b2phy59pyoa3qsh13wvd0toiobz1lafnz0ql8g89omsembxbcv0b1d7or0upx9z0i4bpsl8yewrjay0ubuw64kmeo24nfl0ggmndk15m7yqm2ckmmucwcfn2sykas0abdfqu0lgjk052lvds0ra2i1734hzem0bnjh201n1iddwy59v1m4k12xrtx58cfgg2isevlovp71dhzy64v10baf351tg1oeedb0fx0ydgd60n2l3t7yawyw1fnti9a7davy9r42lpeu12aniacokwbymfuccbfodakwpf4tbonefz94hy707go81s48px5wgp2i2sy6z2w80hydaekci5rpdhteqknh1ilvn8zdv1u7tqe45nrhusmq2cytmum0pahhubcnblndarmje1s3t7t 00:11:03.482 05:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
vigrndonp1m556kvr1152sq61ujrrj2a03e3vx7lncnandnqlkhbhypwxs5tixu7jq07s8te9viud5z2z4h7utoflmiisu4ieqgw9aelt3samo4lv772h67cv7770gw1kqpffzebbmc7h4emopq7ei2apb3iv2eg7xgkonz554r4zblbqdyrjg3qqydoi6crsnk74w48hukdpilu6syyp3ljtv276n1r4rupzzbcdib7vuiibru8yzk5yywt1uzlclgughkbtqvn0hrph0dk4iwgu7nuxzpdnzdfeubwh9g1tylqi6j6lwhg5zt387kb07i6nccz3v8053c0bxg1lobefy4eu7bastl9v4ddetwdlqj5xgcwhxtgaxnt0jjpdus15m6w2lqktv0gnm9q6kmv9yqz60n1jsgyhtvd8chldsbz2tilyqbs6cl616v97qwvdbrlr1o6jo9hrq8i6x2btv1gm0ushynlx6vqmslann1zmeetwwtpntg8kkwqbw5x1rfuf8sxmsy5dig4b8qv6jpccfccvpindfsmppiaylmm3du48ix5kxq99q1sjb4bzl4zby1pf23mmyuvfta7g1tij31rhx1igtizdd5b2phy59pyoa3qsh13wvd0toiobz1lafnz0ql8g89omsembxbcv0b1d7or0upx9z0i4bpsl8yewrjay0ubuw64kmeo24nfl0ggmndk15m7yqm2ckmmucwcfn2sykas0abdfqu0lgjk052lvds0ra2i1734hzem0bnjh201n1iddwy59v1m4k12xrtx58cfgg2isevlovp71dhzy64v10baf351tg1oeedb0fx0ydgd60n2l3t7yawyw1fnti9a7davy9r42lpeu12aniacokwbymfuccbfodakwpf4tbonefz94hy707go81s48px5wgp2i2sy6z2w80hydaekci5rpdhteqknh1ilvn8zdv1u7tqe45nrhusmq2cytmum0pahhubcnblndarmje1s3t7t 00:11:03.482 05:00:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:11:03.746 [2024-07-24 05:00:18.161768] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:11:03.746 [2024-07-24 05:00:18.162420] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66437 ] 00:11:03.746 [2024-07-24 05:00:18.344250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.004 [2024-07-24 05:00:18.560677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.263 [2024-07-24 05:00:18.792478] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:08.173  Copying: 511/511 [MB] (average 1809 MBps) 00:11:08.173 00:11:08.173 05:00:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:11:08.173 05:00:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:11:08.174 05:00:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:08.174 05:00:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:08.174 { 00:11:08.174 "subsystems": [ 00:11:08.174 { 00:11:08.174 "subsystem": "bdev", 00:11:08.174 "config": [ 00:11:08.174 { 00:11:08.174 "params": { 00:11:08.174 "block_size": 512, 00:11:08.174 "num_blocks": 1048576, 00:11:08.174 "name": "malloc0" 00:11:08.174 }, 00:11:08.174 "method": "bdev_malloc_create" 00:11:08.174 }, 00:11:08.174 { 00:11:08.174 "params": { 00:11:08.174 "filename": "/dev/zram1", 00:11:08.174 "name": "uring0" 00:11:08.174 }, 00:11:08.174 "method": "bdev_uring_create" 00:11:08.174 }, 00:11:08.174 { 00:11:08.174 "method": "bdev_wait_for_examine" 00:11:08.174 } 00:11:08.174 ] 00:11:08.174 } 00:11:08.174 ] 00:11:08.174 } 00:11:08.174 [2024-07-24 05:00:22.691037] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:11:08.174 [2024-07-24 05:00:22.691202] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66487 ] 00:11:08.432 [2024-07-24 05:00:22.867348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.690 [2024-07-24 05:00:23.096474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.947 [2024-07-24 05:00:23.324982] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:14.319  Copying: 247/512 [MB] (247 MBps) Copying: 499/512 [MB] (251 MBps) Copying: 512/512 [MB] (average 249 MBps) 00:11:14.319 00:11:14.319 05:00:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:11:14.319 05:00:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:11:14.319 05:00:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:14.319 05:00:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:14.319 { 00:11:14.319 "subsystems": [ 00:11:14.319 { 00:11:14.319 "subsystem": "bdev", 00:11:14.319 "config": [ 00:11:14.319 { 00:11:14.319 "params": { 00:11:14.319 "block_size": 512, 00:11:14.319 "num_blocks": 1048576, 00:11:14.319 "name": "malloc0" 00:11:14.319 }, 00:11:14.319 "method": "bdev_malloc_create" 00:11:14.319 }, 00:11:14.319 { 00:11:14.319 "params": { 00:11:14.319 "filename": "/dev/zram1", 00:11:14.319 "name": "uring0" 00:11:14.319 }, 00:11:14.319 "method": "bdev_uring_create" 00:11:14.319 }, 00:11:14.319 { 00:11:14.319 "method": "bdev_wait_for_examine" 00:11:14.319 } 00:11:14.319 ] 00:11:14.319 } 00:11:14.319 ] 00:11:14.319 } 00:11:14.319 [2024-07-24 05:00:28.930533] Starting SPDK 
v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:11:14.319 [2024-07-24 05:00:28.930709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66565 ] 00:11:14.578 [2024-07-24 05:00:29.105710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.837 [2024-07-24 05:00:29.322635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.096 [2024-07-24 05:00:29.550833] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:21.403  Copying: 196/512 [MB] (196 MBps) Copying: 381/512 [MB] (184 MBps) Copying: 512/512 [MB] (average 176 MBps) 00:11:21.403 00:11:21.403 05:00:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:11:21.403 05:00:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 
vigrndonp1m556kvr1152sq61ujrrj2a03e3vx7lncnandnqlkhbhypwxs5tixu7jq07s8te9viud5z2z4h7utoflmiisu4ieqgw9aelt3samo4lv772h67cv7770gw1kqpffzebbmc7h4emopq7ei2apb3iv2eg7xgkonz554r4zblbqdyrjg3qqydoi6crsnk74w48hukdpilu6syyp3ljtv276n1r4rupzzbcdib7vuiibru8yzk5yywt1uzlclgughkbtqvn0hrph0dk4iwgu7nuxzpdnzdfeubwh9g1tylqi6j6lwhg5zt387kb07i6nccz3v8053c0bxg1lobefy4eu7bastl9v4ddetwdlqj5xgcwhxtgaxnt0jjpdus15m6w2lqktv0gnm9q6kmv9yqz60n1jsgyhtvd8chldsbz2tilyqbs6cl616v97qwvdbrlr1o6jo9hrq8i6x2btv1gm0ushynlx6vqmslann1zmeetwwtpntg8kkwqbw5x1rfuf8sxmsy5dig4b8qv6jpccfccvpindfsmppiaylmm3du48ix5kxq99q1sjb4bzl4zby1pf23mmyuvfta7g1tij31rhx1igtizdd5b2phy59pyoa3qsh13wvd0toiobz1lafnz0ql8g89omsembxbcv0b1d7or0upx9z0i4bpsl8yewrjay0ubuw64kmeo24nfl0ggmndk15m7yqm2ckmmucwcfn2sykas0abdfqu0lgjk052lvds0ra2i1734hzem0bnjh201n1iddwy59v1m4k12xrtx58cfgg2isevlovp71dhzy64v10baf351tg1oeedb0fx0ydgd60n2l3t7yawyw1fnti9a7davy9r42lpeu12aniacokwbymfuccbfodakwpf4tbonefz94hy707go81s48px5wgp2i2sy6z2w80hydaekci5rpdhteqknh1ilvn8zdv1u7tqe45nrhusmq2cytmum0pahhubcnblndarmje1s3t7t == 
\v\i\g\r\n\d\o\n\p\1\m\5\5\6\k\v\r\1\1\5\2\s\q\6\1\u\j\r\r\j\2\a\0\3\e\3\v\x\7\l\n\c\n\a\n\d\n\q\l\k\h\b\h\y\p\w\x\s\5\t\i\x\u\7\j\q\0\7\s\8\t\e\9\v\i\u\d\5\z\2\z\4\h\7\u\t\o\f\l\m\i\i\s\u\4\i\e\q\g\w\9\a\e\l\t\3\s\a\m\o\4\l\v\7\7\2\h\6\7\c\v\7\7\7\0\g\w\1\k\q\p\f\f\z\e\b\b\m\c\7\h\4\e\m\o\p\q\7\e\i\2\a\p\b\3\i\v\2\e\g\7\x\g\k\o\n\z\5\5\4\r\4\z\b\l\b\q\d\y\r\j\g\3\q\q\y\d\o\i\6\c\r\s\n\k\7\4\w\4\8\h\u\k\d\p\i\l\u\6\s\y\y\p\3\l\j\t\v\2\7\6\n\1\r\4\r\u\p\z\z\b\c\d\i\b\7\v\u\i\i\b\r\u\8\y\z\k\5\y\y\w\t\1\u\z\l\c\l\g\u\g\h\k\b\t\q\v\n\0\h\r\p\h\0\d\k\4\i\w\g\u\7\n\u\x\z\p\d\n\z\d\f\e\u\b\w\h\9\g\1\t\y\l\q\i\6\j\6\l\w\h\g\5\z\t\3\8\7\k\b\0\7\i\6\n\c\c\z\3\v\8\0\5\3\c\0\b\x\g\1\l\o\b\e\f\y\4\e\u\7\b\a\s\t\l\9\v\4\d\d\e\t\w\d\l\q\j\5\x\g\c\w\h\x\t\g\a\x\n\t\0\j\j\p\d\u\s\1\5\m\6\w\2\l\q\k\t\v\0\g\n\m\9\q\6\k\m\v\9\y\q\z\6\0\n\1\j\s\g\y\h\t\v\d\8\c\h\l\d\s\b\z\2\t\i\l\y\q\b\s\6\c\l\6\1\6\v\9\7\q\w\v\d\b\r\l\r\1\o\6\j\o\9\h\r\q\8\i\6\x\2\b\t\v\1\g\m\0\u\s\h\y\n\l\x\6\v\q\m\s\l\a\n\n\1\z\m\e\e\t\w\w\t\p\n\t\g\8\k\k\w\q\b\w\5\x\1\r\f\u\f\8\s\x\m\s\y\5\d\i\g\4\b\8\q\v\6\j\p\c\c\f\c\c\v\p\i\n\d\f\s\m\p\p\i\a\y\l\m\m\3\d\u\4\8\i\x\5\k\x\q\9\9\q\1\s\j\b\4\b\z\l\4\z\b\y\1\p\f\2\3\m\m\y\u\v\f\t\a\7\g\1\t\i\j\3\1\r\h\x\1\i\g\t\i\z\d\d\5\b\2\p\h\y\5\9\p\y\o\a\3\q\s\h\1\3\w\v\d\0\t\o\i\o\b\z\1\l\a\f\n\z\0\q\l\8\g\8\9\o\m\s\e\m\b\x\b\c\v\0\b\1\d\7\o\r\0\u\p\x\9\z\0\i\4\b\p\s\l\8\y\e\w\r\j\a\y\0\u\b\u\w\6\4\k\m\e\o\2\4\n\f\l\0\g\g\m\n\d\k\1\5\m\7\y\q\m\2\c\k\m\m\u\c\w\c\f\n\2\s\y\k\a\s\0\a\b\d\f\q\u\0\l\g\j\k\0\5\2\l\v\d\s\0\r\a\2\i\1\7\3\4\h\z\e\m\0\b\n\j\h\2\0\1\n\1\i\d\d\w\y\5\9\v\1\m\4\k\1\2\x\r\t\x\5\8\c\f\g\g\2\i\s\e\v\l\o\v\p\7\1\d\h\z\y\6\4\v\1\0\b\a\f\3\5\1\t\g\1\o\e\e\d\b\0\f\x\0\y\d\g\d\6\0\n\2\l\3\t\7\y\a\w\y\w\1\f\n\t\i\9\a\7\d\a\v\y\9\r\4\2\l\p\e\u\1\2\a\n\i\a\c\o\k\w\b\y\m\f\u\c\c\b\f\o\d\a\k\w\p\f\4\t\b\o\n\e\f\z\9\4\h\y\7\0\7\g\o\8\1\s\4\8\p\x\5\w\g\p\2\i\2\s\y\6\z\2\w\8\0\h\y\d\a\e\k\c\i\5\r\p\d\h\t\e\q\k\n\h\1\i\l\v\n\8\z\d\v\1\u\7\t\q\e\4\5\n\r\h\u\s\m\q\2\c\y\t\m\u\m
\0\p\a\h\h\u\b\c\n\b\l\n\d\a\r\m\j\e\1\s\3\t\7\t ]] 00:11:21.403 05:00:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:11:21.403 05:00:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ vigrndonp1m556kvr1152sq61ujrrj2a03e3vx7lncnandnqlkhbhypwxs5tixu7jq07s8te9viud5z2z4h7utoflmiisu4ieqgw9aelt3samo4lv772h67cv7770gw1kqpffzebbmc7h4emopq7ei2apb3iv2eg7xgkonz554r4zblbqdyrjg3qqydoi6crsnk74w48hukdpilu6syyp3ljtv276n1r4rupzzbcdib7vuiibru8yzk5yywt1uzlclgughkbtqvn0hrph0dk4iwgu7nuxzpdnzdfeubwh9g1tylqi6j6lwhg5zt387kb07i6nccz3v8053c0bxg1lobefy4eu7bastl9v4ddetwdlqj5xgcwhxtgaxnt0jjpdus15m6w2lqktv0gnm9q6kmv9yqz60n1jsgyhtvd8chldsbz2tilyqbs6cl616v97qwvdbrlr1o6jo9hrq8i6x2btv1gm0ushynlx6vqmslann1zmeetwwtpntg8kkwqbw5x1rfuf8sxmsy5dig4b8qv6jpccfccvpindfsmppiaylmm3du48ix5kxq99q1sjb4bzl4zby1pf23mmyuvfta7g1tij31rhx1igtizdd5b2phy59pyoa3qsh13wvd0toiobz1lafnz0ql8g89omsembxbcv0b1d7or0upx9z0i4bpsl8yewrjay0ubuw64kmeo24nfl0ggmndk15m7yqm2ckmmucwcfn2sykas0abdfqu0lgjk052lvds0ra2i1734hzem0bnjh201n1iddwy59v1m4k12xrtx58cfgg2isevlovp71dhzy64v10baf351tg1oeedb0fx0ydgd60n2l3t7yawyw1fnti9a7davy9r42lpeu12aniacokwbymfuccbfodakwpf4tbonefz94hy707go81s48px5wgp2i2sy6z2w80hydaekci5rpdhteqknh1ilvn8zdv1u7tqe45nrhusmq2cytmum0pahhubcnblndarmje1s3t7t == 
\v\i\g\r\n\d\o\n\p\1\m\5\5\6\k\v\r\1\1\5\2\s\q\6\1\u\j\r\r\j\2\a\0\3\e\3\v\x\7\l\n\c\n\a\n\d\n\q\l\k\h\b\h\y\p\w\x\s\5\t\i\x\u\7\j\q\0\7\s\8\t\e\9\v\i\u\d\5\z\2\z\4\h\7\u\t\o\f\l\m\i\i\s\u\4\i\e\q\g\w\9\a\e\l\t\3\s\a\m\o\4\l\v\7\7\2\h\6\7\c\v\7\7\7\0\g\w\1\k\q\p\f\f\z\e\b\b\m\c\7\h\4\e\m\o\p\q\7\e\i\2\a\p\b\3\i\v\2\e\g\7\x\g\k\o\n\z\5\5\4\r\4\z\b\l\b\q\d\y\r\j\g\3\q\q\y\d\o\i\6\c\r\s\n\k\7\4\w\4\8\h\u\k\d\p\i\l\u\6\s\y\y\p\3\l\j\t\v\2\7\6\n\1\r\4\r\u\p\z\z\b\c\d\i\b\7\v\u\i\i\b\r\u\8\y\z\k\5\y\y\w\t\1\u\z\l\c\l\g\u\g\h\k\b\t\q\v\n\0\h\r\p\h\0\d\k\4\i\w\g\u\7\n\u\x\z\p\d\n\z\d\f\e\u\b\w\h\9\g\1\t\y\l\q\i\6\j\6\l\w\h\g\5\z\t\3\8\7\k\b\0\7\i\6\n\c\c\z\3\v\8\0\5\3\c\0\b\x\g\1\l\o\b\e\f\y\4\e\u\7\b\a\s\t\l\9\v\4\d\d\e\t\w\d\l\q\j\5\x\g\c\w\h\x\t\g\a\x\n\t\0\j\j\p\d\u\s\1\5\m\6\w\2\l\q\k\t\v\0\g\n\m\9\q\6\k\m\v\9\y\q\z\6\0\n\1\j\s\g\y\h\t\v\d\8\c\h\l\d\s\b\z\2\t\i\l\y\q\b\s\6\c\l\6\1\6\v\9\7\q\w\v\d\b\r\l\r\1\o\6\j\o\9\h\r\q\8\i\6\x\2\b\t\v\1\g\m\0\u\s\h\y\n\l\x\6\v\q\m\s\l\a\n\n\1\z\m\e\e\t\w\w\t\p\n\t\g\8\k\k\w\q\b\w\5\x\1\r\f\u\f\8\s\x\m\s\y\5\d\i\g\4\b\8\q\v\6\j\p\c\c\f\c\c\v\p\i\n\d\f\s\m\p\p\i\a\y\l\m\m\3\d\u\4\8\i\x\5\k\x\q\9\9\q\1\s\j\b\4\b\z\l\4\z\b\y\1\p\f\2\3\m\m\y\u\v\f\t\a\7\g\1\t\i\j\3\1\r\h\x\1\i\g\t\i\z\d\d\5\b\2\p\h\y\5\9\p\y\o\a\3\q\s\h\1\3\w\v\d\0\t\o\i\o\b\z\1\l\a\f\n\z\0\q\l\8\g\8\9\o\m\s\e\m\b\x\b\c\v\0\b\1\d\7\o\r\0\u\p\x\9\z\0\i\4\b\p\s\l\8\y\e\w\r\j\a\y\0\u\b\u\w\6\4\k\m\e\o\2\4\n\f\l\0\g\g\m\n\d\k\1\5\m\7\y\q\m\2\c\k\m\m\u\c\w\c\f\n\2\s\y\k\a\s\0\a\b\d\f\q\u\0\l\g\j\k\0\5\2\l\v\d\s\0\r\a\2\i\1\7\3\4\h\z\e\m\0\b\n\j\h\2\0\1\n\1\i\d\d\w\y\5\9\v\1\m\4\k\1\2\x\r\t\x\5\8\c\f\g\g\2\i\s\e\v\l\o\v\p\7\1\d\h\z\y\6\4\v\1\0\b\a\f\3\5\1\t\g\1\o\e\e\d\b\0\f\x\0\y\d\g\d\6\0\n\2\l\3\t\7\y\a\w\y\w\1\f\n\t\i\9\a\7\d\a\v\y\9\r\4\2\l\p\e\u\1\2\a\n\i\a\c\o\k\w\b\y\m\f\u\c\c\b\f\o\d\a\k\w\p\f\4\t\b\o\n\e\f\z\9\4\h\y\7\0\7\g\o\8\1\s\4\8\p\x\5\w\g\p\2\i\2\s\y\6\z\2\w\8\0\h\y\d\a\e\k\c\i\5\r\p\d\h\t\e\q\k\n\h\1\i\l\v\n\8\z\d\v\1\u\7\t\q\e\4\5\n\r\h\u\s\m\q\2\c\y\t\m\u\m
\0\p\a\h\h\u\b\c\n\b\l\n\d\a\r\m\j\e\1\s\3\t\7\t ]] 00:11:21.403 05:00:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:21.986 05:00:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:11:21.986 05:00:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:11:21.986 05:00:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:21.986 05:00:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:21.986 { 00:11:21.986 "subsystems": [ 00:11:21.986 { 00:11:21.986 "subsystem": "bdev", 00:11:21.986 "config": [ 00:11:21.986 { 00:11:21.986 "params": { 00:11:21.986 "block_size": 512, 00:11:21.986 "num_blocks": 1048576, 00:11:21.986 "name": "malloc0" 00:11:21.986 }, 00:11:21.986 "method": "bdev_malloc_create" 00:11:21.986 }, 00:11:21.986 { 00:11:21.986 "params": { 00:11:21.986 "filename": "/dev/zram1", 00:11:21.986 "name": "uring0" 00:11:21.986 }, 00:11:21.986 "method": "bdev_uring_create" 00:11:21.986 }, 00:11:21.986 { 00:11:21.986 "method": "bdev_wait_for_examine" 00:11:21.986 } 00:11:21.986 ] 00:11:21.986 } 00:11:21.986 ] 00:11:21.986 } 00:11:21.986 [2024-07-24 05:00:36.373657] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:11:21.986 [2024-07-24 05:00:36.373766] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66699 ] 00:11:21.986 [2024-07-24 05:00:36.532816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.278 [2024-07-24 05:00:36.751880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.537 [2024-07-24 05:00:36.977193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:29.234  Copying: 184/512 [MB] (184 MBps) Copying: 369/512 [MB] (184 MBps) Copying: 512/512 [MB] (average 184 MBps) 00:11:29.234 00:11:29.234 05:00:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:11:29.234 05:00:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:11:29.234 05:00:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:11:29.234 05:00:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:11:29.234 05:00:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:11:29.234 05:00:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:11:29.234 05:00:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:29.234 05:00:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:29.234 { 00:11:29.234 "subsystems": [ 00:11:29.234 { 00:11:29.234 "subsystem": "bdev", 00:11:29.234 "config": [ 00:11:29.234 { 00:11:29.234 "params": { 00:11:29.234 "block_size": 512, 00:11:29.234 "num_blocks": 1048576, 00:11:29.234 "name": "malloc0" 00:11:29.234 }, 00:11:29.234 "method": "bdev_malloc_create" 00:11:29.234 }, 
00:11:29.234 { 00:11:29.234 "params": { 00:11:29.234 "filename": "/dev/zram1", 00:11:29.234 "name": "uring0" 00:11:29.234 }, 00:11:29.234 "method": "bdev_uring_create" 00:11:29.234 }, 00:11:29.234 { 00:11:29.234 "params": { 00:11:29.234 "name": "uring0" 00:11:29.234 }, 00:11:29.234 "method": "bdev_uring_delete" 00:11:29.234 }, 00:11:29.234 { 00:11:29.234 "method": "bdev_wait_for_examine" 00:11:29.234 } 00:11:29.234 ] 00:11:29.234 } 00:11:29.234 ] 00:11:29.234 } 00:11:29.234 [2024-07-24 05:00:43.292461] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:11:29.234 [2024-07-24 05:00:43.292639] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66781 ] 00:11:29.234 [2024-07-24 05:00:43.464051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.234 [2024-07-24 05:00:43.671102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.493 [2024-07-24 05:00:43.896752] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:33.345  Copying: 0/0 [B] (average 0 Bps) 00:11:33.345 00:11:33.345 05:00:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:11:33.345 05:00:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:11:33.345 05:00:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:11:33.345 05:00:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:33.345 05:00:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:11:33.345 05:00:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:33.345 05:00:47 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:11:33.345 05:00:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:33.345 05:00:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:33.345 05:00:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:33.345 05:00:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:33.346 05:00:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:33.346 05:00:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:33.346 05:00:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:33.346 05:00:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:33.346 05:00:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:11:33.346 { 00:11:33.346 "subsystems": [ 00:11:33.346 { 00:11:33.346 "subsystem": "bdev", 00:11:33.346 "config": [ 00:11:33.346 { 00:11:33.346 "params": { 00:11:33.346 "block_size": 512, 00:11:33.346 "num_blocks": 1048576, 00:11:33.346 "name": "malloc0" 00:11:33.346 }, 00:11:33.346 "method": "bdev_malloc_create" 00:11:33.346 }, 00:11:33.346 { 00:11:33.346 "params": { 00:11:33.346 "filename": "/dev/zram1", 00:11:33.346 "name": "uring0" 00:11:33.346 }, 00:11:33.346 "method": "bdev_uring_create" 
00:11:33.346 }, 00:11:33.346 { 00:11:33.346 "params": { 00:11:33.346 "name": "uring0" 00:11:33.346 }, 00:11:33.346 "method": "bdev_uring_delete" 00:11:33.346 }, 00:11:33.346 { 00:11:33.346 "method": "bdev_wait_for_examine" 00:11:33.346 } 00:11:33.346 ] 00:11:33.346 } 00:11:33.346 ] 00:11:33.346 } 00:11:33.346 [2024-07-24 05:00:47.396859] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:11:33.346 [2024-07-24 05:00:47.397031] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66840 ] 00:11:33.346 [2024-07-24 05:00:47.568897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.346 [2024-07-24 05:00:47.776013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.605 [2024-07-24 05:00:48.007770] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:34.171 [2024-07-24 05:00:48.757445] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:11:34.171 [2024-07-24 05:00:48.757500] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:11:34.171 [2024-07-24 05:00:48.757533] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:11:34.171 [2024-07-24 05:00:48.757568] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:36.723 [2024-07-24 05:00:51.004132] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:36.982 05:00:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:11:36.982 05:00:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:36.982 05:00:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:11:36.982 05:00:51 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:11:36.982 05:00:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:11:36.982 05:00:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:36.982 05:00:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:11:36.982 05:00:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:11:36.982 05:00:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:11:36.982 05:00:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:11:36.982 05:00:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:11:36.982 05:00:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:37.241 00:11:37.241 real 0m33.763s 00:11:37.241 user 0m28.390s 00:11:37.241 sys 0m14.440s 00:11:37.241 05:00:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:37.241 05:00:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:37.241 ************************************ 00:11:37.241 END TEST dd_uring_copy 00:11:37.241 ************************************ 00:11:37.241 ************************************ 00:11:37.241 END TEST spdk_dd_uring 00:11:37.241 ************************************ 00:11:37.241 00:11:37.241 real 0m33.919s 00:11:37.241 user 0m28.454s 00:11:37.241 sys 0m14.537s 00:11:37.241 05:00:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:37.241 05:00:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:11:37.241 05:00:51 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:11:37.241 05:00:51 spdk_dd -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:37.241 05:00:51 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:37.241 05:00:51 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:37.241 ************************************ 00:11:37.241 START TEST spdk_dd_sparse 00:11:37.241 ************************************ 00:11:37.241 05:00:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:11:37.501 * Looking for test storage... 00:11:37.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:37.501 05:00:51 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:37.501 05:00:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:37.501 05:00:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.501 05:00:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.501 05:00:51 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.501 05:00:51 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.501 05:00:51 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.501 05:00:51 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:11:37.501 05:00:51 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.501 05:00:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:11:37.501 05:00:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # 
aio_bdev=dd_aio 00:11:37.501 05:00:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:11:37.501 05:00:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:11:37.501 05:00:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:11:37.501 05:00:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:11:37.501 05:00:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:11:37.501 05:00:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:11:37.501 05:00:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:11:37.501 05:00:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:11:37.501 05:00:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:11:37.501 1+0 records in 00:11:37.501 1+0 records out 00:11:37.501 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0076179 s, 551 MB/s 00:11:37.501 05:00:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:11:37.501 1+0 records in 00:11:37.501 1+0 records out 00:11:37.501 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0088153 s, 476 MB/s 00:11:37.501 05:00:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:11:37.501 1+0 records in 00:11:37.501 1+0 records out 00:11:37.501 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00620281 s, 676 MB/s 00:11:37.501 05:00:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:11:37.501 05:00:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:37.501 05:00:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:37.501 05:00:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:37.501 ************************************ 00:11:37.501 START TEST dd_sparse_file_to_file 00:11:37.501 
************************************ 00:11:37.501 05:00:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # file_to_file 00:11:37.501 05:00:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:11:37.501 05:00:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:11:37.501 05:00:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:11:37.501 05:00:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:11:37.501 05:00:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:11:37.501 05:00:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:11:37.501 05:00:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:11:37.501 05:00:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:11:37.501 05:00:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:11:37.501 05:00:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:37.501 { 00:11:37.501 "subsystems": [ 00:11:37.501 { 00:11:37.501 "subsystem": "bdev", 00:11:37.501 "config": [ 00:11:37.501 { 00:11:37.501 "params": { 00:11:37.501 "block_size": 4096, 00:11:37.501 "filename": "dd_sparse_aio_disk", 00:11:37.501 "name": "dd_aio" 00:11:37.501 }, 00:11:37.501 "method": "bdev_aio_create" 00:11:37.501 }, 00:11:37.501 { 00:11:37.501 "params": { 00:11:37.501 "lvs_name": "dd_lvstore", 00:11:37.501 "bdev_name": 
"dd_aio" 00:11:37.501 }, 00:11:37.501 "method": "bdev_lvol_create_lvstore" 00:11:37.501 }, 00:11:37.501 { 00:11:37.501 "method": "bdev_wait_for_examine" 00:11:37.501 } 00:11:37.501 ] 00:11:37.501 } 00:11:37.501 ] 00:11:37.501 } 00:11:37.761 [2024-07-24 05:00:52.150893] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:11:37.761 [2024-07-24 05:00:52.151049] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66973 ] 00:11:37.761 [2024-07-24 05:00:52.332580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.020 [2024-07-24 05:00:52.544184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.278 [2024-07-24 05:00:52.780841] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:39.915  Copying: 12/36 [MB] (average 923 MBps) 00:11:39.915 00:11:39.915 05:00:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:11:39.915 05:00:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:11:39.915 05:00:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:11:39.916 05:00:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:11:39.916 05:00:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:11:39.916 05:00:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:11:39.916 05:00:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:11:39.916 05:00:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 
00:11:39.916 05:00:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:11:39.916 05:00:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:11:39.916 00:11:39.916 real 0m2.364s 00:11:39.916 user 0m1.976s 00:11:39.916 sys 0m1.176s 00:11:39.916 05:00:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:39.916 05:00:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:39.916 ************************************ 00:11:39.916 END TEST dd_sparse_file_to_file 00:11:39.916 ************************************ 00:11:39.916 05:00:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:11:39.916 05:00:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:39.916 05:00:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.916 05:00:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:39.916 ************************************ 00:11:39.916 START TEST dd_sparse_file_to_bdev 00:11:39.916 ************************************ 00:11:39.916 05:00:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:11:39.916 05:00:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:11:39.916 05:00:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:11:39.916 05:00:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:11:39.916 05:00:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local 
-A method_bdev_lvol_create_1 00:11:39.916 05:00:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:11:39.916 05:00:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:11:39.916 05:00:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:39.916 05:00:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:39.916 { 00:11:39.916 "subsystems": [ 00:11:39.916 { 00:11:39.916 "subsystem": "bdev", 00:11:39.916 "config": [ 00:11:39.916 { 00:11:39.916 "params": { 00:11:39.916 "block_size": 4096, 00:11:39.916 "filename": "dd_sparse_aio_disk", 00:11:39.916 "name": "dd_aio" 00:11:39.916 }, 00:11:39.916 "method": "bdev_aio_create" 00:11:39.916 }, 00:11:39.916 { 00:11:39.916 "params": { 00:11:39.916 "lvs_name": "dd_lvstore", 00:11:39.916 "lvol_name": "dd_lvol", 00:11:39.916 "size_in_mib": 36, 00:11:39.916 "thin_provision": true 00:11:39.916 }, 00:11:39.916 "method": "bdev_lvol_create" 00:11:39.916 }, 00:11:39.916 { 00:11:39.916 "method": "bdev_wait_for_examine" 00:11:39.916 } 00:11:39.916 ] 00:11:39.916 } 00:11:39.916 ] 00:11:39.916 } 00:11:39.916 [2024-07-24 05:00:54.537967] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:11:39.916 [2024-07-24 05:00:54.538101] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67033 ] 00:11:40.175 [2024-07-24 05:00:54.696463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.434 [2024-07-24 05:00:54.907054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.693 [2024-07-24 05:00:55.138729] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:42.331  Copying: 12/36 [MB] (average 428 MBps) 00:11:42.331 00:11:42.331 00:11:42.331 real 0m2.257s 00:11:42.331 user 0m1.930s 00:11:42.331 sys 0m1.129s 00:11:42.331 05:00:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:42.331 05:00:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:42.331 ************************************ 00:11:42.331 END TEST dd_sparse_file_to_bdev 00:11:42.331 ************************************ 00:11:42.331 05:00:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:11:42.331 05:00:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:42.331 05:00:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:42.331 05:00:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:42.331 ************************************ 00:11:42.331 START TEST dd_sparse_bdev_to_file 00:11:42.331 ************************************ 00:11:42.331 05:00:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:11:42.331 05:00:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:11:42.331 05:00:56 
spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:11:42.331 05:00:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:11:42.331 05:00:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:11:42.331 05:00:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:11:42.331 05:00:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:11:42.331 05:00:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:11:42.331 05:00:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:42.331 { 00:11:42.331 "subsystems": [ 00:11:42.331 { 00:11:42.331 "subsystem": "bdev", 00:11:42.331 "config": [ 00:11:42.331 { 00:11:42.331 "params": { 00:11:42.331 "block_size": 4096, 00:11:42.331 "filename": "dd_sparse_aio_disk", 00:11:42.331 "name": "dd_aio" 00:11:42.331 }, 00:11:42.331 "method": "bdev_aio_create" 00:11:42.331 }, 00:11:42.331 { 00:11:42.331 "method": "bdev_wait_for_examine" 00:11:42.331 } 00:11:42.331 ] 00:11:42.331 } 00:11:42.331 ] 00:11:42.331 } 00:11:42.331 [2024-07-24 05:00:56.886700] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:11:42.331 [2024-07-24 05:00:56.886872] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67089 ] 00:11:42.590 [2024-07-24 05:00:57.067966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.849 [2024-07-24 05:00:57.281369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.108 [2024-07-24 05:00:57.520376] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:44.487  Copying: 12/36 [MB] (average 857 MBps) 00:11:44.487 00:11:44.487 05:00:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:11:44.487 05:00:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:11:44.487 05:00:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:11:44.487 05:00:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:11:44.487 05:00:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:11:44.487 05:00:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:11:44.487 05:00:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:11:44.487 05:00:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:11:44.487 ************************************ 00:11:44.487 END TEST dd_sparse_bdev_to_file 00:11:44.487 ************************************ 00:11:44.487 05:00:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:11:44.487 05:00:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 
24576 == \2\4\5\7\6 ]] 00:11:44.487 00:11:44.487 real 0m2.333s 00:11:44.487 user 0m1.940s 00:11:44.487 sys 0m1.191s 00:11:44.487 05:00:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:44.487 05:00:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:44.747 05:00:59 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:11:44.747 05:00:59 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:11:44.747 05:00:59 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:11:44.747 05:00:59 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:11:44.747 05:00:59 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:11:44.747 00:11:44.747 real 0m7.316s 00:11:44.747 user 0m5.949s 00:11:44.747 sys 0m3.737s 00:11:44.747 05:00:59 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:44.747 05:00:59 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:44.747 ************************************ 00:11:44.747 END TEST spdk_dd_sparse 00:11:44.747 ************************************ 00:11:44.747 05:00:59 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:11:44.747 05:00:59 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:44.747 05:00:59 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:44.747 05:00:59 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:44.747 ************************************ 00:11:44.747 START TEST spdk_dd_negative 00:11:44.747 ************************************ 00:11:44.747 05:00:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:11:44.747 * Looking for test storage... 
00:11:44.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:44.747 05:00:59 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:44.747 05:00:59 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.747 05:00:59 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.747 05:00:59 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.747 05:00:59 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.747 05:00:59 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.747 05:00:59 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.747 05:00:59 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:11:44.747 05:00:59 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.747 05:00:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:44.747 05:00:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:44.747 05:00:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:44.747 05:00:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:44.747 05:00:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:11:44.747 05:00:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:11:44.747 05:00:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:44.747 05:00:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:44.747 ************************************ 00:11:44.747 START TEST dd_invalid_arguments 00:11:44.747 ************************************ 00:11:44.747 05:00:59 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:11:44.747 05:00:59 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:11:44.747 05:00:59 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:11:44.747 05:00:59 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:11:44.747 05:00:59 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:44.748 05:00:59 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:44.748 05:00:59 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:44.748 05:00:59 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:44.748 05:00:59 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:44.748 05:00:59 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:44.748 05:00:59 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:44.748 05:00:59 
spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:44.748 05:00:59 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:11:45.008 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:11:45.008 00:11:45.008 CPU options: 00:11:45.008 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:11:45.008 (like [0,1,10]) 00:11:45.008 --lcores lcore to CPU mapping list. The list is in the format: 00:11:45.008 [<,lcores[@CPUs]>...] 00:11:45.008 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:11:45.008 Within the group, '-' is used for range separator, 00:11:45.008 ',' is used for single number separator. 00:11:45.008 '( )' can be omitted for single element group, 00:11:45.008 '@' can be omitted if cpus and lcores have the same value 00:11:45.008 --disable-cpumask-locks Disable CPU core lock files. 00:11:45.008 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:11:45.008 pollers in the app support interrupt mode) 00:11:45.008 -p, --main-core main (primary) core for DPDK 00:11:45.008 00:11:45.008 Configuration options: 00:11:45.008 -c, --config, --json JSON config file 00:11:45.008 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:11:45.008 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:11:45.008 --wait-for-rpc wait for RPCs to initialize subsystems 00:11:45.008 --rpcs-allowed comma-separated list of permitted RPCS 00:11:45.008 --json-ignore-init-errors don't exit on invalid config entry 00:11:45.008 00:11:45.008 Memory options: 00:11:45.008 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:11:45.008 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:11:45.008 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:11:45.008 -R, --huge-unlink unlink huge files after initialization 00:11:45.008 -n, --mem-channels number of memory channels used for DPDK 00:11:45.008 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:11:45.008 --msg-mempool-size global message memory pool size in count (default: 262143) 00:11:45.008 --no-huge run without using hugepages 00:11:45.008 -i, --shm-id shared memory ID (optional) 00:11:45.008 -g, --single-file-segments force creating just one hugetlbfs file 00:11:45.008 00:11:45.008 PCI options: 00:11:45.008 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:11:45.008 -B, --pci-blocked pci addr to block (can be used more than once) 00:11:45.008 -u, --no-pci disable PCI access 00:11:45.008 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:11:45.008 00:11:45.008 Log options: 00:11:45.008 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:11:45.008 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:11:45.008 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:11:45.008 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:11:45.008 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:11:45.008 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:11:45.008 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:11:45.008 thread, trace, uring, vbdev_delay, 
vbdev_gpt, vbdev_lvol, vbdev_opal, 00:11:45.008 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:11:45.008 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:11:45.008 virtio_vfio_user, vmd) 00:11:45.008 --silence-noticelog disable notice level logging to stderr 00:11:45.008 00:11:45.008 Trace options: 00:11:45.008 --num-trace-entries number of trace entries for each core, must be power of 2, 00:11:45.008 setting 0 to disable trace (default 32768) 00:11:45.008 Tracepoints vary in size and can use more than one trace entry. 00:11:45.008 -e, --tpoint-group [:] 00:11:45.008 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:11:45.008 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:11:45.008 [2024-07-24 05:00:59.475101] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:11:45.008 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:11:45.008 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:11:45.008 a tracepoint group. First tpoint inside a group can be enabled by 00:11:45.008 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:11:45.008 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:11:45.008 in /include/spdk_internal/trace_defs.h 00:11:45.008 00:11:45.008 Other options: 00:11:45.008 -h, --help show this usage 00:11:45.008 -v, --version print SPDK version 00:11:45.008 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:11:45.008 --env-context Opaque context for use of the env implementation 00:11:45.008 00:11:45.008 Application specific: 00:11:45.008 [--------- DD Options ---------] 00:11:45.008 --if Input file. Must specify either --if or --ib. 00:11:45.008 --ib Input bdev. Must specifier either --if or --ib 00:11:45.008 --of Output file. Must specify either --of or --ob. 00:11:45.008 --ob Output bdev. Must specify either --of or --ob. 00:11:45.008 --iflag Input file flags. 
00:11:45.008 --oflag Output file flags. 00:11:45.008 --bs I/O unit size (default: 4096) 00:11:45.008 --qd Queue depth (default: 2) 00:11:45.008 --count I/O unit count. The number of I/O units to copy. (default: all) 00:11:45.008 --skip Skip this many I/O units at start of input. (default: 0) 00:11:45.008 --seek Skip this many I/O units at start of output. (default: 0) 00:11:45.008 --aio Force usage of AIO. (by default io_uring is used if available) 00:11:45.008 --sparse Enable hole skipping in input target 00:11:45.008 Available iflag and oflag values: 00:11:45.008 append - append mode 00:11:45.008 direct - use direct I/O for data 00:11:45.008 directory - fail unless a directory 00:11:45.008 dsync - use synchronized I/O for data 00:11:45.008 noatime - do not update access time 00:11:45.008 noctty - do not assign controlling terminal from file 00:11:45.008 nofollow - do not follow symlinks 00:11:45.008 nonblock - use non-blocking I/O 00:11:45.008 sync - use synchronized I/O for data and metadata 00:11:45.008 05:00:59 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:11:45.008 05:00:59 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:45.008 05:00:59 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:45.008 05:00:59 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:45.008 00:11:45.008 real 0m0.179s 00:11:45.008 user 0m0.083s 00:11:45.008 sys 0m0.094s 00:11:45.008 05:00:59 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:45.008 05:00:59 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:11:45.008 ************************************ 00:11:45.008 END TEST dd_invalid_arguments 00:11:45.008 ************************************ 00:11:45.008 05:00:59 spdk_dd.spdk_dd_negative 
-- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:11:45.008 05:00:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:45.008 05:00:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:45.008 05:00:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:45.008 ************************************ 00:11:45.008 START TEST dd_double_input 00:11:45.008 ************************************ 00:11:45.008 05:00:59 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:11:45.008 05:00:59 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:11:45.008 05:00:59 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:11:45.008 05:00:59 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:11:45.008 05:00:59 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.008 05:00:59 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:45.008 05:00:59 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.008 05:00:59 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:45.009 05:00:59 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.009 05:00:59 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:11:45.009 05:00:59 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.009 05:00:59 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:45.009 05:00:59 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:11:45.268 [2024-07-24 05:00:59.701906] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:11:45.268 05:00:59 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:11:45.268 05:00:59 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:45.268 05:00:59 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:45.268 05:00:59 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:45.268 00:11:45.268 real 0m0.171s 00:11:45.268 user 0m0.087s 00:11:45.268 sys 0m0.083s 00:11:45.268 05:00:59 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:45.268 ************************************ 00:11:45.268 END TEST dd_double_input 00:11:45.268 05:00:59 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:11:45.268 ************************************ 00:11:45.268 05:00:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:11:45.268 05:00:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:45.268 05:00:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:45.268 05:00:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:45.268 
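Each negative test above wraps `spdk_dd` in a `NOT` helper and then inspects `$es` (e.g. `es=22` when the tool rejects `--if` together with `--ib`). A minimal sketch of that inversion pattern; the real helper lives in `autotest_common.sh` and does more bookkeeping, so this standalone version is an assumption about its core behavior only:

```shell
# Hedged sketch of an exit-status-inverting wrapper for negative tests.
# The exit status is captured first, mirroring the "local es=0" / "es=22"
# sequence visible in the log above.
NOT() {
    local es=0
    "$@" || es=$?
    if (( es == 0 )); then
        return 1    # command unexpectedly succeeded: the negative test fails
    fi
    return 0        # command failed as expected: the negative test passes
}
```

For example, `NOT false` returns 0 (the failure was expected), while `NOT true` returns 1.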
************************************ 00:11:45.268 START TEST dd_double_output 00:11:45.268 ************************************ 00:11:45.268 05:00:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:11:45.268 05:00:59 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:11:45.268 05:00:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:11:45.268 05:00:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:11:45.268 05:00:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.268 05:00:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:45.268 05:00:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.268 05:00:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:45.268 05:00:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.268 05:00:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:45.268 05:00:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.268 05:00:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:45.268 05:00:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:11:45.528 [2024-07-24 05:00:59.929533] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:11:45.528 05:00:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:11:45.528 05:00:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:45.528 05:00:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:45.528 05:00:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:45.528 00:11:45.528 real 0m0.168s 00:11:45.528 user 0m0.071s 00:11:45.528 sys 0m0.095s 00:11:45.528 05:00:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:45.528 ************************************ 00:11:45.528 END TEST dd_double_output 00:11:45.528 ************************************ 00:11:45.528 05:00:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:11:45.528 05:01:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:11:45.528 05:01:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:45.528 05:01:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:45.528 05:01:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:45.528 ************************************ 00:11:45.528 START TEST dd_no_input 00:11:45.528 ************************************ 00:11:45.528 05:01:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 
00:11:45.528 05:01:00 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:11:45.528 05:01:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:11:45.528 05:01:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:11:45.528 05:01:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.528 05:01:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:45.528 05:01:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.528 05:01:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:45.528 05:01:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.528 05:01:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:45.528 05:01:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.528 05:01:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:45.528 05:01:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:11:45.528 [2024-07-24 05:01:00.154498] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:11:45.786 05:01:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:11:45.786 05:01:00 spdk_dd.spdk_dd_negative.dd_no_input -- 
common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:45.786 05:01:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:45.786 05:01:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:45.786 00:11:45.786 real 0m0.177s 00:11:45.786 user 0m0.080s 00:11:45.786 sys 0m0.095s 00:11:45.786 05:01:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:45.786 ************************************ 00:11:45.786 END TEST dd_no_input 00:11:45.786 05:01:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:11:45.786 ************************************ 00:11:45.786 05:01:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:11:45.786 05:01:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:45.786 05:01:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:45.786 05:01:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:45.786 ************************************ 00:11:45.786 START TEST dd_no_output 00:11:45.786 ************************************ 00:11:45.786 05:01:00 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:11:45.786 05:01:00 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:45.786 05:01:00 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:11:45.786 05:01:00 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:45.786 05:01:00 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.786 05:01:00 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:45.786 05:01:00 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.786 05:01:00 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:45.786 05:01:00 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.786 05:01:00 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:45.787 05:01:00 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:45.787 05:01:00 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:45.787 05:01:00 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:45.787 [2024-07-24 05:01:00.393934] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:46.102 00:11:46.102 real 0m0.179s 00:11:46.102 user 0m0.086s 00:11:46.102 sys 0m0.091s 00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:11:46.102 ************************************ 00:11:46.102 END TEST dd_no_output 00:11:46.102 ************************************ 00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:46.102 ************************************ 00:11:46.102 START TEST dd_wrong_blocksize 00:11:46.102 ************************************ 00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:11:46.102 [2024-07-24 05:01:00.631051] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:46.102 00:11:46.102 real 0m0.171s 00:11:46.102 user 0m0.083s 00:11:46.102 sys 0m0.084s 00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:46.102 05:01:00 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:11:46.102 ************************************ 00:11:46.102 END TEST 
dd_wrong_blocksize 00:11:46.102 ************************************ 00:11:46.361 05:01:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:11:46.361 05:01:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:46.361 05:01:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:46.361 05:01:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:46.361 ************************************ 00:11:46.361 START TEST dd_smaller_blocksize 00:11:46.361 ************************************ 00:11:46.361 05:01:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:11:46.361 05:01:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:11:46.361 05:01:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:11:46.361 05:01:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:11:46.361 05:01:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:46.361 05:01:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:46.361 05:01:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:46.361 05:01:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:11:46.361 05:01:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:46.361 05:01:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:46.361 05:01:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:46.361 05:01:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:46.361 05:01:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:11:46.361 [2024-07-24 05:01:00.867747] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:11:46.361 [2024-07-24 05:01:00.867902] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67336 ] 00:11:46.620 [2024-07-24 05:01:01.053664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.880 [2024-07-24 05:01:01.380843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.139 [2024-07-24 05:01:01.606719] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:47.398 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:11:47.657 [2024-07-24 05:01:02.093236] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:11:47.657 [2024-07-24 05:01:02.093319] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:48.595 [2024-07-24 05:01:02.913565] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:48.858 05:01:03 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:11:48.858 05:01:03 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:48.858 05:01:03 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:11:48.858 05:01:03 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:11:48.858 05:01:03 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:11:48.858 05:01:03 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:48.858 00:11:48.858 real 0m2.610s 00:11:48.858 user 0m1.969s 00:11:48.858 sys 0m0.527s 00:11:48.858 05:01:03 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:11:48.858 ************************************ 00:11:48.858 END TEST dd_smaller_blocksize 00:11:48.858 05:01:03 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:11:48.858 ************************************ 00:11:48.858 05:01:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:11:48.858 05:01:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:48.858 05:01:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.858 05:01:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:48.858 ************************************ 00:11:48.858 START TEST dd_invalid_count 00:11:48.858 ************************************ 00:11:48.858 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:11:48.858 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:11:48.858 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:11:48.858 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:11:48.858 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:48.858 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:48.858 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:48.858 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:48.858 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:48.858 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:48.858 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:48.858 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:48.858 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:11:49.118 [2024-07-24 05:01:03.509396] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:11:49.118 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:11:49.118 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:49.118 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:49.118 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:49.118 00:11:49.118 real 0m0.132s 00:11:49.118 user 0m0.065s 00:11:49.118 sys 0m0.065s 00:11:49.118 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:49.118 ************************************ 00:11:49.118 END TEST dd_invalid_count 00:11:49.118 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 
00:11:49.118 ************************************ 00:11:49.118 05:01:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:11:49.118 05:01:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:49.118 05:01:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:49.118 05:01:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:49.118 ************************************ 00:11:49.118 START TEST dd_invalid_oflag 00:11:49.118 ************************************ 00:11:49.118 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:11:49.118 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:11:49.118 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:11:49.118 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:11:49.118 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:49.118 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:49.118 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:49.118 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:49.118 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:49.118 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:49.118 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:49.118 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:49.118 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:11:49.118 [2024-07-24 05:01:03.730076] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:11:49.378 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:11:49.378 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:49.378 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:49.378 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:49.378 00:11:49.378 real 0m0.176s 00:11:49.378 user 0m0.085s 00:11:49.378 sys 0m0.089s 00:11:49.378 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:49.378 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:11:49.378 ************************************ 00:11:49.378 END TEST dd_invalid_oflag 00:11:49.378 ************************************ 00:11:49.378 05:01:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:11:49.378 05:01:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:49.378 05:01:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:49.378 05:01:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:49.378 
************************************ 00:11:49.378 START TEST dd_invalid_iflag 00:11:49.378 ************************************ 00:11:49.378 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:11:49.378 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:11:49.378 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:11:49.378 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:11:49.378 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:49.378 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:49.378 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:49.378 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:49.378 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:49.378 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:49.378 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:49.378 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:49.378 05:01:03 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:11:49.378 [2024-07-24 05:01:03.973161] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:11:49.638 05:01:04 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:11:49.638 05:01:04 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:49.638 05:01:04 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:49.638 05:01:04 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:49.638 00:11:49.638 real 0m0.185s 00:11:49.638 user 0m0.082s 00:11:49.638 sys 0m0.102s 00:11:49.638 05:01:04 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:49.638 ************************************ 00:11:49.638 END TEST dd_invalid_iflag 00:11:49.638 05:01:04 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:11:49.638 ************************************ 00:11:49.638 05:01:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:11:49.638 05:01:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:49.638 05:01:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:49.638 05:01:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:49.638 ************************************ 00:11:49.638 START TEST dd_unknown_flag 00:11:49.638 ************************************ 00:11:49.638 05:01:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:11:49.638 05:01:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
--oflag=-1 00:11:49.638 05:01:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:11:49.638 05:01:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:11:49.638 05:01:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:49.638 05:01:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:49.638 05:01:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:49.638 05:01:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:49.638 05:01:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:49.638 05:01:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:49.638 05:01:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:49.638 05:01:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:49.638 05:01:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:11:49.638 [2024-07-24 05:01:04.214098] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:11:49.638 [2024-07-24 05:01:04.214270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67460 ] 00:11:49.897 [2024-07-24 05:01:04.395791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.156 [2024-07-24 05:01:04.610012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.415 [2024-07-24 05:01:04.835482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:50.416 [2024-07-24 05:01:04.947864] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:11:50.416 [2024-07-24 05:01:04.947922] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:50.416 [2024-07-24 05:01:04.947987] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:11:50.416 [2024-07-24 05:01:04.948000] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:50.416 [2024-07-24 05:01:04.948223] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:11:50.416 [2024-07-24 05:01:04.948242] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:50.416 [2024-07-24 05:01:04.948300] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:11:50.416 [2024-07-24 05:01:04.948311] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:11:51.352 [2024-07-24 05:01:05.757809] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:51.610 05:01:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:11:51.610 05:01:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:51.610 05:01:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:11:51.610 05:01:06 
spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:11:51.610 05:01:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:11:51.610 05:01:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:51.610 00:11:51.610 real 0m2.117s 00:11:51.610 user 0m1.747s 00:11:51.610 sys 0m0.265s 00:11:51.610 05:01:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:51.610 ************************************ 00:11:51.610 05:01:06 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:11:51.610 END TEST dd_unknown_flag 00:11:51.610 ************************************ 00:11:51.869 05:01:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:11:51.869 05:01:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:51.869 05:01:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:51.869 05:01:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:51.869 ************************************ 00:11:51.869 START TEST dd_invalid_json 00:11:51.869 ************************************ 00:11:51.869 05:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:11:51.869 05:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:11:51.869 05:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:11:51.869 05:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:11:51.869 05:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:11:51.869 05:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:51.869 05:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:51.869 05:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:51.869 05:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:51.869 05:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:51.869 05:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:51.869 05:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:51.869 05:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:51.869 05:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:11:51.869 [2024-07-24 05:01:06.348415] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:11:51.869 [2024-07-24 05:01:06.348520] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67505 ] 00:11:52.127 [2024-07-24 05:01:06.508646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.127 [2024-07-24 05:01:06.722394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.127 [2024-07-24 05:01:06.722468] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:11:52.127 [2024-07-24 05:01:06.722493] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:52.127 [2024-07-24 05:01:06.722505] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:52.127 [2024-07-24 05:01:06.722575] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:52.693 05:01:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:11:52.693 05:01:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:52.693 05:01:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:11:52.693 05:01:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:11:52.693 05:01:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:11:52.693 05:01:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:52.693 00:11:52.693 real 0m0.889s 00:11:52.693 user 0m0.642s 00:11:52.693 sys 0m0.144s 00:11:52.694 05:01:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:52.694 05:01:07 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:11:52.694 
************************************ 00:11:52.694 END TEST dd_invalid_json 00:11:52.694 ************************************ 00:11:52.694 00:11:52.694 real 0m7.959s 00:11:52.694 user 0m5.351s 00:11:52.694 sys 0m2.249s 00:11:52.694 05:01:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:52.694 05:01:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:52.694 ************************************ 00:11:52.694 END TEST spdk_dd_negative 00:11:52.694 ************************************ 00:11:52.694 00:11:52.694 real 3m26.927s 00:11:52.694 user 2m50.696s 00:11:52.694 sys 1m10.770s 00:11:52.694 05:01:07 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:52.694 ************************************ 00:11:52.694 END TEST spdk_dd 00:11:52.694 ************************************ 00:11:52.694 05:01:07 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:52.694 05:01:07 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:11:52.694 05:01:07 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:11:52.694 05:01:07 -- spdk/autotest.sh@260 -- # timing_exit lib 00:11:52.694 05:01:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:52.694 05:01:07 -- common/autotest_common.sh@10 -- # set +x 00:11:52.952 05:01:07 -- spdk/autotest.sh@262 -- # '[' 1 -eq 1 ']' 00:11:52.952 05:01:07 -- spdk/autotest.sh@263 -- # run_test iscsi_tgt /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/iscsi_tgt.sh 00:11:52.952 05:01:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:52.952 05:01:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:52.952 05:01:07 -- common/autotest_common.sh@10 -- # set +x 00:11:52.952 ************************************ 00:11:52.952 START TEST iscsi_tgt 00:11:52.952 ************************************ 00:11:52.952 05:01:07 iscsi_tgt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/iscsi_tgt.sh 00:11:52.952 * Looking for test storage... 
00:11:52.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@10 -- # uname -s 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@10 -- # '[' '!' Linux = Linux ']' 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:11:52.952 05:01:07 
iscsi_tgt -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@18 -- # iscsicleanup 00:11:52.952 Cleaning up iSCSI connection 00:11:52.952 05:01:07 iscsi_tgt -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:11:52.952 05:01:07 iscsi_tgt -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:11:52.952 iscsiadm: No matching sessions found 00:11:52.952 05:01:07 iscsi_tgt -- common/autotest_common.sh@981 -- # true 00:11:52.952 05:01:07 iscsi_tgt -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:11:52.952 iscsiadm: No records found 00:11:52.952 05:01:07 iscsi_tgt -- common/autotest_common.sh@982 -- # true 00:11:52.952 05:01:07 iscsi_tgt -- common/autotest_common.sh@983 -- # rm -rf 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@21 -- # create_veth_interfaces 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@32 -- # ip link set init_br nomaster 00:11:52.952 Cannot find device "init_br" 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@32 -- # true 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@33 -- # ip link set tgt_br nomaster 00:11:52.952 Cannot find device "tgt_br" 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@33 -- # true 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@34 -- # ip link set tgt_br2 nomaster 00:11:52.952 Cannot find device "tgt_br2" 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@34 -- # true 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@35 -- # ip link set init_br down 00:11:52.952 Cannot find device "init_br" 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@35 -- # true 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@36 -- # ip link set tgt_br down 00:11:52.952 Cannot find device "tgt_br" 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@36 -- # true 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@37 -- # ip link set tgt_br2 down 00:11:52.952 Cannot 
find device "tgt_br2" 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@37 -- # true 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@38 -- # ip link delete iscsi_br type bridge 00:11:52.952 Cannot find device "iscsi_br" 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@38 -- # true 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@39 -- # ip link delete spdk_init_int 00:11:52.952 Cannot find device "spdk_init_int" 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@39 -- # true 00:11:52.952 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@40 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int 00:11:52.952 Cannot open network namespace "spdk_iscsi_ns": No such file or directory 00:11:53.211 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@40 -- # true 00:11:53.211 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@41 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int2 00:11:53.211 Cannot open network namespace "spdk_iscsi_ns": No such file or directory 00:11:53.211 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@41 -- # true 00:11:53.211 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@42 -- # ip netns del spdk_iscsi_ns 00:11:53.211 Cannot remove namespace file "/var/run/netns/spdk_iscsi_ns": No such file or directory 00:11:53.211 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@42 -- # true 00:11:53.211 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@44 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:11:53.211 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@47 -- # ip netns add spdk_iscsi_ns 00:11:53.211 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@50 -- # ip link add spdk_init_int type veth peer name init_br 00:11:53.211 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@51 -- # ip link add spdk_tgt_int type veth peer name tgt_br 00:11:53.211 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@52 -- # ip link add spdk_tgt_int2 type veth peer name tgt_br2 00:11:53.211 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@55 -- # ip link set spdk_tgt_int netns spdk_iscsi_ns 00:11:53.211 
05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@56 -- # ip link set spdk_tgt_int2 netns spdk_iscsi_ns 00:11:53.211 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@59 -- # ip addr add 10.0.0.2/24 dev spdk_init_int 00:11:53.211 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@60 -- # ip netns exec spdk_iscsi_ns ip addr add 10.0.0.1/24 dev spdk_tgt_int 00:11:53.211 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@61 -- # ip netns exec spdk_iscsi_ns ip addr add 10.0.0.3/24 dev spdk_tgt_int2 00:11:53.211 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@64 -- # ip link set spdk_init_int up 00:11:53.211 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@65 -- # ip link set init_br up 00:11:53.211 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@66 -- # ip link set tgt_br up 00:11:53.211 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@67 -- # ip link set tgt_br2 up 00:11:53.211 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@68 -- # ip netns exec spdk_iscsi_ns ip link set spdk_tgt_int up 00:11:53.211 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@69 -- # ip netns exec spdk_iscsi_ns ip link set spdk_tgt_int2 up 00:11:53.211 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@70 -- # ip netns exec spdk_iscsi_ns ip link set lo up 00:11:53.211 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@73 -- # ip link add iscsi_br type bridge 00:11:53.211 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@74 -- # ip link set iscsi_br up 00:11:53.211 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@77 -- # ip link set init_br master iscsi_br 00:11:53.211 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@78 -- # ip link set tgt_br master iscsi_br 00:11:53.211 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@79 -- # ip link set tgt_br2 master iscsi_br 00:11:53.470 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@82 -- # iptables -I INPUT 1 -i spdk_init_int -p tcp --dport 3260 -j ACCEPT 00:11:53.471 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@83 -- # iptables -A FORWARD -i iscsi_br -o iscsi_br -j ACCEPT 00:11:53.471 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@86 -- # ping -c 1 10.0.0.1 00:11:53.471 PING 10.0.0.1 
(10.0.0.1) 56(84) bytes of data. 00:11:53.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:11:53.471 00:11:53.471 --- 10.0.0.1 ping statistics --- 00:11:53.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.471 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:11:53.471 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@87 -- # ping -c 1 10.0.0.3 00:11:53.471 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:53.471 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:11:53.471 00:11:53.471 --- 10.0.0.3 ping statistics --- 00:11:53.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.471 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:11:53.471 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@88 -- # ip netns exec spdk_iscsi_ns ping -c 1 10.0.0.2 00:11:53.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:53.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.022 ms 00:11:53.471 00:11:53.471 --- 10.0.0.2 ping statistics --- 00:11:53.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.471 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:11:53.471 05:01:07 iscsi_tgt -- iscsi_tgt/common.sh@89 -- # ip netns exec spdk_iscsi_ns ping -c 1 10.0.0.2 00:11:53.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:53.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:11:53.471 00:11:53.471 --- 10.0.0.2 ping statistics --- 00:11:53.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.471 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:11:53.471 05:01:07 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@23 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:11:53.471 05:01:07 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@25 -- # run_test iscsi_tgt_sock /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock/sock.sh 00:11:53.471 05:01:07 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:53.471 05:01:07 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:53.471 05:01:07 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:11:53.471 ************************************ 00:11:53.471 START TEST iscsi_tgt_sock 00:11:53.471 ************************************ 00:11:53.471 05:01:07 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock/sock.sh 00:11:53.471 * Looking for test storage... 
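The veth/namespace/bridge topology that common.sh built and ping-verified above can be recapped as a dry-run sketch. The interface names, namespace, and 10.0.0.0/24 addressing are taken from the log; the `NET_CMD` knob is my own illustrative addition (it is not part of the SPDK scripts) — it defaults to `echo` so the sketch is a no-op unless you opt in with root privileges.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the veth topology from iscsi_tgt/common.sh above.
# NET_CMD=echo prints the commands; NET_CMD=sudo would actually apply them.
NET_CMD="${NET_CMD:-echo}"

setup_veth_topology() {
  $NET_CMD ip netns add spdk_iscsi_ns
  # One veth pair per endpoint; the *_br ends get enslaved to the bridge.
  $NET_CMD ip link add spdk_init_int type veth peer name init_br
  $NET_CMD ip link add spdk_tgt_int type veth peer name tgt_br
  $NET_CMD ip link add spdk_tgt_int2 type veth peer name tgt_br2
  # Target-side ends move into the namespace.
  $NET_CMD ip link set spdk_tgt_int netns spdk_iscsi_ns
  $NET_CMD ip link set spdk_tgt_int2 netns spdk_iscsi_ns
  # Addressing: initiator is .2, the two targets are .1 and .3.
  $NET_CMD ip addr add 10.0.0.2/24 dev spdk_init_int
  $NET_CMD ip netns exec spdk_iscsi_ns ip addr add 10.0.0.1/24 dev spdk_tgt_int
  $NET_CMD ip netns exec spdk_iscsi_ns ip addr add 10.0.0.3/24 dev spdk_tgt_int2
  # The bridge ties the host-side peer ends together.
  $NET_CMD ip link add iscsi_br type bridge
  for dev in init_br tgt_br tgt_br2; do
    $NET_CMD ip link set "$dev" master iscsi_br
  done
}

setup_veth_topology
```

With the topology applied for real, the three pings in the log (host to 10.0.0.1/10.0.0.3, namespace to 10.0.0.2) are what confirm both directions of the bridge work.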
00:11:53.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:11:53.471 05:01:08 
iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@48 -- # iscsitestinit 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@50 -- # HELLO_SOCK_APP='ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/examples/hello_sock' 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@51 -- # SOCAT_APP=socat 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@52 -- # OPENSSL_APP=openssl 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@53 -- # PSK='-N ssl -E 1234567890ABCDEF -I psk.spdk.io' 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@58 -- # timing_enter sock_client 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x 00:11:53.471 Testing client path 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@59 -- # echo 'Testing client path' 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@63 -- # server_pid=67757 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@64 -- # trap 'killprocess $server_pid;iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@66 -- # waitfortcp 67757 10.0.0.2:3260 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@25 -- # local addr=10.0.0.2:3260 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@27 -- # echo 'Waiting for process to start up and listen on address 10.0.0.2:3260...' 00:11:53.471 Waiting for process to start up and listen on address 10.0.0.2:3260... 
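The `waitfortcp` step logged above blocks until the socat echo server is accepting connections on 10.0.0.2:3260. A minimal sketch of that polling pattern, using bash's `/dev/tcp` pseudo-device (this is a bash-specific feature, and the helper here is my own approximation, not the SPDK implementation):

```shell
#!/usr/bin/env bash
# Poll until a TCP connect to host:port succeeds, or give up after N tries.
waitfortcp() {
  local host=$1 port=$2 tries=${3:-50}
  while (( tries-- > 0 )); do
    # bash opens a TCP connection when redirecting to /dev/tcp/<host>/<port>;
    # the subshell closes the fd again immediately on success.
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      return 0
    fi
    sleep 0.1
  done
  return 1
}
```

Usage in the harness would be along the lines of `socat tcp-l:3260,fork,bind=10.0.0.2 exec:/bin/cat & waitfortcp 10.0.0.2 3260`, after which the hello_sock client can connect.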
00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@29 -- # xtrace_disable 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x 00:11:53.471 05:01:08 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@62 -- # socat tcp-l:3260,fork,bind=10.0.0.2 exec:/bin/cat 00:11:54.039 [2024-07-24 05:01:08.592323] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:11:54.039 [2024-07-24 05:01:08.592477] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67772 ] 00:11:54.298 [2024-07-24 05:01:08.781201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.557 [2024-07-24 05:01:09.080805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.557 [2024-07-24 05:01:09.080891] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:11:54.557 [2024-07-24 05:01:09.080926] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix) 00:11:54.557 [2024-07-24 05:01:09.081084] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 48728) 00:11:54.557 [2024-07-24 05:01:09.081167] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:11:55.492 [2024-07-24 05:01:10.081202] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:11:55.492 [2024-07-24 05:01:10.081370] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:11:56.060 [2024-07-24 05:01:10.560962] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:11:56.060 [2024-07-24 05:01:10.561117] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67798 ] 00:11:56.319 [2024-07-24 05:01:10.744006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.589 [2024-07-24 05:01:10.972954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.589 [2024-07-24 05:01:10.973053] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:11:56.589 [2024-07-24 05:01:10.973085] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix) 00:11:56.589 [2024-07-24 05:01:10.973241] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 48734) 00:11:56.589 [2024-07-24 05:01:10.973322] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:11:57.543 [2024-07-24 05:01:11.973357] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:11:57.543 [2024-07-24 05:01:11.973530] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:11:58.110 [2024-07-24 05:01:12.444353] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:11:58.110 [2024-07-24 05:01:12.444479] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67829 ] 00:11:58.110 [2024-07-24 05:01:12.603885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.368 [2024-07-24 05:01:12.829023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.368 [2024-07-24 05:01:12.829113] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:11:58.368 [2024-07-24 05:01:12.829153] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix) 00:11:58.368 [2024-07-24 05:01:12.829447] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 44226) 00:11:58.368 [2024-07-24 05:01:12.829566] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:11:59.306 [2024-07-24 05:01:13.829603] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:11:59.306 [2024-07-24 05:01:13.829758] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:11:59.875 killing process with pid 67757 00:11:59.875 Testing SSL server path 00:11:59.875 Waiting for process to start up and listen on address 10.0.0.1:3260... 00:11:59.875 [2024-07-24 05:01:14.401881] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:11:59.875 [2024-07-24 05:01:14.402041] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67879 ] 00:12:00.134 [2024-07-24 05:01:14.587531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.393 [2024-07-24 05:01:14.805073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.393 [2024-07-24 05:01:14.805152] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:12:00.393 [2024-07-24 05:01:14.805245] hello_sock.c: 472:hello_sock_listen: *NOTICE*: Listening connection on 10.0.0.1:3260 with sock_impl(ssl) 00:12:00.393 [2024-07-24 05:01:14.921655] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:12:00.393 [2024-07-24 05:01:14.921804] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67884 ] 00:12:00.653 [2024-07-24 05:01:15.109455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.912 [2024-07-24 05:01:15.403313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.912 [2024-07-24 05:01:15.403407] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:12:00.912 [2024-07-24 05:01:15.403440] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:12:00.912 [2024-07-24 05:01:15.408410] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 39376) 00:12:00.912 [2024-07-24 05:01:15.408806] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 
39376) to (10.0.0.1, 3260) 00:12:00.912 [2024-07-24 05:01:15.411469] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:12:01.849 [2024-07-24 05:01:16.411523] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:12:01.849 [2024-07-24 05:01:16.411695] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:12:01.849 [2024-07-24 05:01:16.411813] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:12:02.417 [2024-07-24 05:01:16.901478] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:12:02.418 [2024-07-24 05:01:16.901643] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67913 ] 00:12:02.677 [2024-07-24 05:01:17.082908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.936 [2024-07-24 05:01:17.319777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.936 [2024-07-24 05:01:17.319849] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:12:02.936 [2024-07-24 05:01:17.319882] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:12:02.936 [2024-07-24 05:01:17.321040] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 39380) to (10.0.0.1, 3260) 00:12:02.936 [2024-07-24 05:01:17.324714] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 39380) 00:12:02.936 [2024-07-24 05:01:17.327238] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 
00:12:03.874 [2024-07-24 05:01:18.327293] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:12:03.874 [2024-07-24 05:01:18.327436] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:12:03.874 [2024-07-24 05:01:18.327566] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:12:04.442 [2024-07-24 05:01:18.827885] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:12:04.442 [2024-07-24 05:01:18.828045] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67941 ] 00:12:04.442 [2024-07-24 05:01:19.008227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.701 [2024-07-24 05:01:19.231038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.701 [2024-07-24 05:01:19.231123] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:12:04.701 [2024-07-24 05:01:19.231157] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:12:04.701 [2024-07-24 05:01:19.232749] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 39396) to (10.0.0.1, 3260) 00:12:04.701 [2024-07-24 05:01:19.235804] posix.c: 755:posix_sock_create_ssl_context: *ERROR*: Incorrect TLS version provided: 7 00:12:04.701 [2024-07-24 05:01:19.235907] posix.c:1033:posix_sock_create: *ERROR*: posix_sock_create_ssl_context() failed, errno = 2 00:12:04.701 [2024-07-24 05:01:19.235956] hello_sock.c: 309:hello_sock_connect: *ERROR*: connect error(2): No such file or directory 00:12:04.701 [2024-07-24 05:01:19.235969] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:04.701 [2024-07-24 05:01:19.236032] hello_sock.c: 591:main: *ERROR*: ERROR starting 
application 00:12:04.701 [2024-07-24 05:01:19.236044] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:12:04.701 [2024-07-24 05:01:19.236098] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:12:05.270 [2024-07-24 05:01:19.727274] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:12:05.270 [2024-07-24 05:01:19.727430] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67957 ] 00:12:05.529 [2024-07-24 05:01:19.908728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.529 [2024-07-24 05:01:20.147347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.529 [2024-07-24 05:01:20.147432] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:12:05.529 [2024-07-24 05:01:20.147482] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:12:05.529 [2024-07-24 05:01:20.149571] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 39400) to (10.0.0.1, 3260) 00:12:05.529 [2024-07-24 05:01:20.152263] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 39400) 00:12:05.529 [2024-07-24 05:01:20.154813] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 
00:12:06.907 [2024-07-24 05:01:21.154870] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:12:06.907 [2024-07-24 05:01:21.155031] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:12:06.907 [2024-07-24 05:01:21.155160] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:12:07.166 SSL_connect:before SSL initialization 00:12:07.166 SSL_connect:SSLv3/TLS write client hello 00:12:07.166 [2024-07-24 05:01:21.660004] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.2, 33804) to (10.0.0.1, 3260) 00:12:07.166 SSL_connect:SSLv3/TLS write client hello 00:12:07.166 SSL_connect:SSLv3/TLS read server hello 00:12:07.166 Can't use SSL_get_servername 00:12:07.166 SSL_connect:TLSv1.3 read encrypted extensions 00:12:07.166 SSL_connect:SSLv3/TLS read finished 00:12:07.166 SSL_connect:SSLv3/TLS write change cipher spec 00:12:07.166 SSL_connect:SSLv3/TLS write finished 00:12:07.166 SSL_connect:SSL negotiation finished successfully 00:12:07.166 SSL_connect:SSL negotiation finished successfully 00:12:07.166 SSL_connect:SSLv3/TLS read server session ticket 00:12:09.070 DONE 00:12:09.070 SSL3 alert write:warning:close notify 00:12:09.070 [2024-07-24 05:01:23.604185] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:12:09.070 [2024-07-24 05:01:23.673440] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:12:09.070 [2024-07-24 05:01:23.673614] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68008 ] 00:12:09.329 [2024-07-24 05:01:23.861061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.588 [2024-07-24 05:01:24.140557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.588 [2024-07-24 05:01:24.140801] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:12:09.588 [2024-07-24 05:01:24.140954] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:12:09.588 [2024-07-24 05:01:24.142482] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 34748) to (10.0.0.1, 3260) 00:12:09.588 [2024-07-24 05:01:24.145759] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 34748) 00:12:09.588 [2024-07-24 05:01:24.147109] hello_sock.c: 240:hello_sock_writev_poll: *ERROR*: Write to socket failed. Closing connection... 
00:12:09.588 [2024-07-24 05:01:24.147120] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:12:09.588 [2024-07-24 05:01:24.147272] hello_sock.c: 208:hello_sock_recv_poll: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:10.525 [2024-07-24 05:01:25.147259] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:12:10.525 [2024-07-24 05:01:25.147606] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:10.525 [2024-07-24 05:01:25.147811] hello_sock.c: 591:main: *ERROR*: ERROR starting application 00:12:10.525 [2024-07-24 05:01:25.147856] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:12:11.094 [2024-07-24 05:01:25.640166] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:12:11.094 [2024-07-24 05:01:25.640553] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68034 ] 00:12:11.352 [2024-07-24 05:01:25.818999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.611 [2024-07-24 05:01:26.043679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.611 [2024-07-24 05:01:26.043994] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:12:11.611 [2024-07-24 05:01:26.044038] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:12:11.611 [2024-07-24 05:01:26.045354] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 34754) to (10.0.0.1, 3260) 00:12:11.611 [2024-07-24 05:01:26.048793] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 34754) 00:12:11.611 [2024-07-24 05:01:26.049764] posix.c: 
586:posix_sock_psk_find_session_server_cb: *ERROR*: Unknown Client's PSK ID 00:12:11.611 [2024-07-24 05:01:26.049836] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:12:11.611 [2024-07-24 05:01:26.049867] hello_sock.c: 208:hello_sock_recv_poll: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:12.547 [2024-07-24 05:01:27.049863] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:12:12.547 [2024-07-24 05:01:27.050089] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:12.547 [2024-07-24 05:01:27.050150] hello_sock.c: 591:main: *ERROR*: ERROR starting application 00:12:12.547 [2024-07-24 05:01:27.050165] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:12:13.115 killing process with pid 67879 00:12:14.050 [2024-07-24 05:01:28.514791] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:12:14.050 [2024-07-24 05:01:28.515112] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:12:14.617 Waiting for process to start up and listen on address 10.0.0.1:3260... 00:12:14.617 [2024-07-24 05:01:29.037464] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
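The SSL server path above exercises TLS 1.3 with an external pre-shared key (`-E 1234567890ABCDEF -I psk.spdk.io` in the HELLO_SOCK_APP flags), including the negative cases: a bad TLS version and an unknown PSK identity, both of which abort the handshake. With stock OpenSSL, the client side of such a connection can be approximated as below; the key and identity are the ones visible in the log, while the target address is illustrative, and the command is only echoed here since no server is running.

```shell
#!/usr/bin/env bash
# Sketch of an equivalent TLS-PSK client connect using the openssl CLI.
# -psk takes the key as hex; -psk_identity must match what the server expects,
# otherwise the server rejects the session (the "Unknown Client's PSK ID"
# error seen above).
PSK_KEY=1234567890ABCDEF
PSK_ID=psk.spdk.io
echo "openssl s_client -connect 10.0.0.1:3260 -tls1_3 -psk ${PSK_KEY} -psk_identity ${PSK_ID}"
```

The `SSL_connect:` trace lines in the log are the standard OpenSSL handshake state callbacks for exactly this kind of session (client hello, encrypted extensions, finished, session ticket).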
00:12:14.617 [2024-07-24 05:01:29.037601] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68092 ] 00:12:14.617 [2024-07-24 05:01:29.198005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.874 [2024-07-24 05:01:29.413717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.874 [2024-07-24 05:01:29.413803] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:12:14.874 [2024-07-24 05:01:29.413899] hello_sock.c: 472:hello_sock_listen: *NOTICE*: Listening connection on 10.0.0.1:3260 with sock_impl(posix) 00:12:15.133 [2024-07-24 05:01:29.519251] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.2, 33808) to (10.0.0.1, 3260) 00:12:15.133 [2024-07-24 05:01:29.519411] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:12:15.133 killing process with pid 68092 00:12:16.069 [2024-07-24 05:01:30.550260] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:12:16.069 [2024-07-24 05:01:30.550708] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:12:16.635 ************************************ 00:12:16.635 END TEST iscsi_tgt_sock 00:12:16.635 ************************************ 00:12:16.635 00:12:16.635 real 0m23.116s 00:12:16.635 user 0m28.723s 00:12:16.635 sys 0m3.409s 00:12:16.635 05:01:31 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:16.635 05:01:31 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x 00:12:16.635 05:01:31 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@26 -- # [[ -d /usr/local/calsoft ]] 00:12:16.635 05:01:31 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@27 -- # run_test iscsi_tgt_calsoft /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.sh 
00:12:16.635 05:01:31 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:16.635 05:01:31 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:16.635 05:01:31 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:12:16.635 ************************************ 00:12:16.635 START TEST iscsi_tgt_calsoft 00:12:16.635 ************************************ 00:12:16.635 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.sh 00:12:16.635 * Looking for test storage... 00:12:16.635 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft 00:12:16.635 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:12:16.635 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:12:16.635 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:12:16.635 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:12:16.635 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:12:16.635 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:12:16.635 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:12:16.635 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@21 -- # 
TARGET_IP2=10.0.0.3 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@15 -- # MALLOC_BDEV_SIZE=64 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@16 -- # MALLOC_BLOCK_SIZE=512 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@18 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@19 -- # calsoft_py=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.py 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@22 -- # mkdir -p /usr/local/etc 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@23 -- # cp /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/its.conf /usr/local/etc/ 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@26 -- # echo IP=10.0.0.1 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@28 -- # timing_enter start_iscsi_tgt 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # 
set +x 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@30 -- # iscsitestinit 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@33 -- # pid=68185 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@34 -- # echo 'Process pid: 68185' 00:12:16.636 Process pid: 68185 00:12:16.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@32 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x1 --wait-for-rpc 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@36 -- # trap 'killprocess $pid; delete_tmp_conf_files; iscsitestfini; exit 1 ' SIGINT SIGTERM EXIT 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@38 -- # waitforlisten 68185 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@829 -- # '[' -z 68185 ']' 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:16.636 05:01:31 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x 00:12:16.894 [2024-07-24 05:01:31.337674] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:12:16.894 [2024-07-24 05:01:31.338086] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68185 ] 00:12:16.894 [2024-07-24 05:01:31.519987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.152 [2024-07-24 05:01:31.735534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.720 05:01:32 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:17.720 05:01:32 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@862 -- # return 0 00:12:17.720 05:01:32 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:12:17.979 05:01:32 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:18.238 [2024-07-24 05:01:32.868045] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:19.190 iscsi_tgt is listening. Running tests... 00:12:19.190 05:01:33 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@41 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:12:19.190 05:01:33 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@43 -- # timing_exit start_iscsi_tgt 00:12:19.190 05:01:33 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:19.190 05:01:33 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x 00:12:19.190 05:01:33 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_auth_group 1 -c 'user:root secret:tester' 00:12:19.190 05:01:33 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_discovery_auth -g 1 00:12:19.447 05:01:34 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:12:19.706 05:01:34 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:12:19.964 05:01:34 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b MyBdev 64 512 00:12:20.223 MyBdev 00:12:20.224 05:01:34 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias MyBdev:0 1:2 64 -g 1 00:12:20.482 05:01:34 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@55 -- # sleep 1 00:12:21.420 05:01:35 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@57 -- # '[' '' ']' 00:12:21.420 05:01:35 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.py /home/vagrant/spdk_repo/spdk/../output 00:12:21.420 [2024-07-24 05:01:36.036140] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:12:21.679 [2024-07-24 05:01:36.058981] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:12:21.679 [2024-07-24 05:01:36.059081] 
iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:21.679 [2024-07-24 05:01:36.082284] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:12:21.679 [2024-07-24 05:01:36.124987] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:12:21.679 [2024-07-24 05:01:36.125086] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:21.679 [2024-07-24 05:01:36.145317] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:12:21.679 [2024-07-24 05:01:36.163356] iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 4/max 1, expecting 0 00:12:21.679 [2024-07-24 05:01:36.201993] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:12:21.679 [2024-07-24 05:01:36.202121] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:21.679 [2024-07-24 05:01:36.224188] iscsi.c:4234:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 2745410467, and the dataout task tag is 2728567458 00:12:21.679 [2024-07-24 05:01:36.224323] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:12:21.679 [2024-07-24 05:01:36.224547] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:12:21.679 [2024-07-24 05:01:36.224614] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:12:21.679 [2024-07-24 05:01:36.259984] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:12:21.679 [2024-07-24 05:01:36.260087] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(4) ignore (ExpCmdSN=5, MaxCmdSN=67) 00:12:21.679 [2024-07-24 05:01:36.260378] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:12:21.679 [2024-07-24 05:01:36.299701] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(2) ignore (ExpCmdSN=3, MaxCmdSN=66) 00:12:21.679 [2024-07-24 05:01:36.299775] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore 
(ExpCmdSN=4, MaxCmdSN=66) 00:12:21.679 [2024-07-24 05:01:36.300016] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:21.938 [2024-07-24 05:01:36.359735] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:12:21.938 [2024-07-24 05:01:36.359831] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:21.938 [2024-07-24 05:01:36.380118] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:12:21.938 [2024-07-24 05:01:36.380313] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:21.938 [2024-07-24 05:01:36.401572] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:12:21.938 [2024-07-24 05:01:36.401805] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:21.938 [2024-07-24 05:01:36.439178] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:12:21.938 [2024-07-24 05:01:36.439295] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:21.938 [2024-07-24 05:01:36.461548] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:12:21.938 [2024-07-24 05:01:36.461759] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:21.938 [2024-07-24 05:01:36.483438] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:12:21.939 [2024-07-24 05:01:36.503283] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:12:21.939 [2024-07-24 05:01:36.503404] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:22.198 [2024-07-24 05:01:36.598786] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:12:22.198 [2024-07-24 05:01:36.639693] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=2 00:12:22.198 [2024-07-24 05:01:36.660709] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, 
MaxCmdSN=66) 00:12:22.198 [2024-07-24 05:01:36.660935] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:22.198 [2024-07-24 05:01:36.683432] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:12:22.198 [2024-07-24 05:01:36.726354] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:12:22.198 [2024-07-24 05:01:36.726457] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:22.198 [2024-07-24 05:01:36.747519] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:12:22.198 [2024-07-24 05:01:36.766169] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:22.198 [2024-07-24 05:01:36.787574] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:12:22.198 [2024-07-24 05:01:36.807833] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:12:22.198 [2024-07-24 05:01:36.807930] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:22.458 [2024-07-24 05:01:36.843075] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:12:22.458 [2024-07-24 05:01:36.862031] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:12:22.458 [2024-07-24 05:01:36.905563] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:12:22.458 [2024-07-24 05:01:36.944696] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:12:22.458 [2024-07-24 05:01:36.982603] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:12:22.458 [2024-07-24 05:01:36.982701] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:22.458 [2024-07-24 05:01:37.021138] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(1) ignore (ExpCmdSN=3, MaxCmdSN=66) 00:12:22.458 [2024-07-24 05:01:37.021249] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(1) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:12:22.458 [2024-07-24 
05:01:37.021435] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=5, MaxCmdSN=67) 00:12:22.458 [2024-07-24 05:01:37.021514] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=6, MaxCmdSN=67) 00:12:22.458 [2024-07-24 05:01:37.022015] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9 00:12:22.458 [2024-07-24 05:01:37.041380] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:12:22.458 [2024-07-24 05:01:37.078749] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:12:22.458 [2024-07-24 05:01:37.078843] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:22.717 [2024-07-24 05:01:37.259395] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:12:22.717 [2024-07-24 05:01:37.259524] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:22.717 [2024-07-24 05:01:37.302557] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:12:22.717 [2024-07-24 05:01:37.302654] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:22.717 [2024-07-24 05:01:37.343470] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9 00:12:22.976 [2024-07-24 05:01:37.361633] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:12:22.976 [2024-07-24 05:01:37.361676] iscsi.c:3961:iscsi_handle_recovery_datain: *ERROR*: Initiator requests BegRun: 0x00000000, RunLength:0x00001000 greater than maximum DataSN: 0x00000004. 00:12:22.976 [2024-07-24 05:01:37.361694] iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=10) failed on iqn.2016-06.io.spdk:Target3,t,0x0001(iqn.1994-05.com.redhat:b3283535dc3b,i,0x00230d030000) 00:12:22.976 [2024-07-24 05:01:37.361706] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. 
Close the connection 00:12:22.976 [2024-07-24 05:01:37.378031] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:12:22.976 [2024-07-24 05:01:37.378138] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:22.976 [2024-07-24 05:01:37.397258] iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:12:22.976 [2024-07-24 05:01:37.397301] iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on iqn.2016-06.io.spdk:Target3,t,0x0001(iqn.1994-05.com.redhat:b3283535dc3b,i,0x00230d030000) 00:12:22.976 [2024-07-24 05:01:37.397315] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. Close the connection 00:12:22.976 [2024-07-24 05:01:37.434569] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:12:22.976 [2024-07-24 05:01:37.434671] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:22.976 [2024-07-24 05:01:37.470836] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:12:22.976 [2024-07-24 05:01:37.470933] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:22.976 [2024-07-24 05:01:37.490120] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:12:22.976 [2024-07-24 05:01:37.490216] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:22.976 [2024-07-24 05:01:37.548424] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:12:22.976 [2024-07-24 05:01:37.569022] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:12:22.976 [2024-07-24 05:01:37.569118] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:23.236 [2024-07-24 05:01:37.640662] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:12:23.236 [2024-07-24 05:01:37.640773] 
iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:23.236 [2024-07-24 05:01:37.659300] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:12:23.236 [2024-07-24 05:01:37.659428] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:23.236 [2024-07-24 05:01:37.736730] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:12:23.236 [2024-07-24 05:01:37.736950] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:23.236 [2024-07-24 05:01:37.756795] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:12:23.236 [2024-07-24 05:01:37.776671] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:12:23.236 [2024-07-24 05:01:37.796032] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:12:23.236 [2024-07-24 05:01:37.796143] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:23.236 [2024-07-24 05:01:37.830002] iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:12:23.236 [2024-07-24 05:01:37.830106] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:12:23.495 [2024-07-24 05:01:37.914600] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12 00:12:23.495 [2024-07-24 05:01:37.978611] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:12:23.495 [2024-07-24 05:01:38.038929] iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 4/max 1, expecting 0 00:12:23.495 [2024-07-24 05:01:38.059517] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:12:23.495 [2024-07-24 05:01:38.095440] iscsi.c:4522:iscsi_pdu_hdr_handle: *ERROR*: before Full Feature 00:12:23.495 PDU 00:12:23.495 00000000 01 81 00 00 00 00 00 81 00 02 3d 03 00 00 00 00 ..........=..... 
00:12:23.495 00000010 00 00 00 05 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:12:23.495 00000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:12:23.495 [2024-07-24 05:01:38.095525] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. Close the connection 00:12:23.754 [2024-07-24 05:01:38.220167] param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 276 00:12:23.754 [2024-07-24 05:01:38.220209] iscsi.c:1303:iscsi_op_login_store_incoming_params: *ERROR*: iscsi_parse_params() failed 00:12:23.754 [2024-07-24 05:01:38.236501] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key ImmediateDataa 00:12:23.754 [2024-07-24 05:01:38.274187] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:12:23.754 [2024-07-24 05:01:38.312742] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:12:23.754 [2024-07-24 05:01:38.330582] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:12:24.013 [2024-07-24 05:01:38.488191] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:12:24.013 [2024-07-24 05:01:38.488645] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:24.013 [2024-07-24 05:01:38.523618] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:12:24.013 [2024-07-24 05:01:38.584713] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:12:24.013 [2024-07-24 05:01:38.604112] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:12:24.014 [2024-07-24 05:01:38.604208] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:24.273 [2024-07-24 05:01:38.877934] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:12:24.532 [2024-07-24 05:01:38.930845] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:12:24.532 [2024-07-24 05:01:38.951831] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: 
CmdSN(0) error ExpCmdSN=6 00:12:24.532 [2024-07-24 05:01:38.971516] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:12:24.532 [2024-07-24 05:01:38.987407] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:12:24.532 [2024-07-24 05:01:39.008016] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(3) error ExpCmdSN=4 00:12:24.532 [2024-07-24 05:01:39.008148] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:12:24.532 [2024-07-24 05:01:39.047651] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:12:24.532 [2024-07-24 05:01:39.069067] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:12:24.532 [2024-07-24 05:01:39.069164] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:24.532 [2024-07-24 05:01:39.088337] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=ffffffff 00:12:24.532 [2024-07-24 05:01:39.107181] iscsi.c:4522:iscsi_pdu_hdr_handle: *ERROR*: before Full Feature 00:12:24.532 PDU 00:12:24.532 00000000 00 81 00 00 00 00 00 81 00 02 3d 03 00 00 00 00 ..........=..... 00:12:24.532 00000010 00 00 00 05 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:12:24.532 00000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:12:24.532 [2024-07-24 05:01:39.107238] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. 
Close the connection 00:12:24.532 [2024-07-24 05:01:39.126145] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:12:24.532 [2024-07-24 05:01:39.143965] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:12:24.791 [2024-07-24 05:01:39.166260] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:12:24.791 [2024-07-24 05:01:39.166357] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:24.791 [2024-07-24 05:01:39.185052] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:12:24.791 [2024-07-24 05:01:39.185148] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:24.791 [2024-07-24 05:01:39.246858] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:12:24.791 [2024-07-24 05:01:39.269600] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:12:24.791 [2024-07-24 05:01:39.345806] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:12:24.791 [2024-07-24 05:01:39.402006] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:12:25.050 [2024-07-24 05:01:39.425488] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12 00:12:25.050 [2024-07-24 05:01:39.444582] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:12:25.050 [2024-07-24 05:01:39.444678] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:25.050 [2024-07-24 05:01:39.464417] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:12:25.050 [2024-07-24 05:01:39.464514] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:25.050 [2024-07-24 05:01:39.483964] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(341) ignore (ExpCmdSN=8, MaxCmdSN=71) 00:12:25.050 [2024-07-24 05:01:39.484076] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(8) ignore (ExpCmdSN=9, MaxCmdSN=71) 
00:12:25.050 [2024-07-24 05:01:39.484604] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12 00:12:25.050 [2024-07-24 05:01:39.527132] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=2 00:12:25.050 [2024-07-24 05:01:39.549614] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=8, MaxCmdSN=71) 00:12:25.050 [2024-07-24 05:01:39.549702] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9 00:12:25.050 [2024-07-24 05:01:39.568795] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:12:25.050 [2024-07-24 05:01:39.588946] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:12:25.050 [2024-07-24 05:01:39.589047] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:25.309 [2024-07-24 05:01:39.683617] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:12:27.214 [2024-07-24 05:01:41.643743] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:12:28.150 [2024-07-24 05:01:42.684247] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:12:29.084 [2024-07-24 05:01:43.667488] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=6, MaxCmdSN=68) 00:12:29.084 [2024-07-24 05:01:43.667940] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=7 00:12:29.084 [2024-07-24 05:01:43.684486] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=5, MaxCmdSN=68) 00:12:30.479 [2024-07-24 05:01:44.684799] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(4) ignore (ExpCmdSN=6, MaxCmdSN=69) 00:12:30.479 [2024-07-24 05:01:44.684973] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=7, MaxCmdSN=70) 00:12:30.480 [2024-07-24 05:01:44.684995] iscsi.c:4028:iscsi_handle_status_snack: *ERROR*: Unable to find StatSN: 0x00000007. For a StatusSNACK, assuming this is a proactive SNACK for an untransmitted StatSN, ignoring. 
00:12:30.480 [2024-07-24 05:01:44.685015] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=8 00:12:42.686 [2024-07-24 05:01:56.733075] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:12:42.686 [2024-07-24 05:01:56.753811] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:12:42.686 [2024-07-24 05:01:56.773278] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:12:42.686 [2024-07-24 05:01:56.773829] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:12:42.687 [2024-07-24 05:01:56.794952] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:12:42.687 [2024-07-24 05:01:56.813865] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:12:42.687 [2024-07-24 05:01:56.837918] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:12:42.687 [2024-07-24 05:01:56.877289] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=64 00:12:42.687 [2024-07-24 05:01:56.878873] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:12:42.687 [2024-07-24 05:01:56.899717] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1107296256) error ExpCmdSN=66 00:12:42.687 [2024-07-24 05:01:56.917803] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:12:42.687 [2024-07-24 05:01:56.940932] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=67 00:12:42.687 Skipping tc_ffp_15_2. It is known to fail. 00:12:42.687 Skipping tc_ffp_29_2. It is known to fail. 00:12:42.687 Skipping tc_ffp_29_3. It is known to fail. 00:12:42.687 Skipping tc_ffp_29_4. It is known to fail. 00:12:42.687 Skipping tc_err_1_1. It is known to fail. 00:12:42.687 Skipping tc_err_1_2. It is known to fail. 00:12:42.687 Skipping tc_err_2_8. It is known to fail. 00:12:42.687 Skipping tc_err_3_1. It is known to fail. 00:12:42.687 Skipping tc_err_3_2. It is known to fail. 
00:12:42.687 Skipping tc_err_3_3. It is known to fail. 00:12:42.687 Skipping tc_err_3_4. It is known to fail. 00:12:42.687 Skipping tc_err_5_1. It is known to fail. 00:12:42.687 Skipping tc_login_3_1. It is known to fail. 00:12:42.687 Skipping tc_login_11_2. It is known to fail. 00:12:42.687 Skipping tc_login_11_4. It is known to fail. 00:12:42.687 Skipping tc_login_2_2. It is known to fail. 00:12:42.687 Skipping tc_login_29_1. It is known to fail. 00:12:42.687 05:01:57 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@62 -- # failed=0 00:12:42.687 05:01:57 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:42.687 05:01:57 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@67 -- # iscsicleanup 00:12:42.687 05:01:57 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:12:42.687 Cleaning up iSCSI connection 00:12:42.687 05:01:57 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:12:42.687 iscsiadm: No matching sessions found 00:12:42.687 05:01:57 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@981 -- # true 00:12:42.687 05:01:57 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:12:42.687 iscsiadm: No records found 00:12:42.687 05:01:57 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@982 -- # true 00:12:42.687 05:01:57 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@983 -- # rm -rf 00:12:42.687 05:01:57 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@68 -- # killprocess 68185 00:12:42.687 05:01:57 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@948 -- # '[' -z 68185 ']' 00:12:42.687 05:01:57 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@952 -- # kill -0 68185 00:12:42.687 05:01:57 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@953 -- # uname 00:12:42.687 05:01:57 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@953 -- # '[' 
Linux = Linux ']' 00:12:42.687 05:01:57 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68185 00:12:42.687 05:01:57 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:42.687 killing process with pid 68185 00:12:42.687 05:01:57 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:42.687 05:01:57 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68185' 00:12:42.687 05:01:57 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@967 -- # kill 68185 00:12:42.687 05:01:57 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@972 -- # wait 68185 00:12:45.224 05:01:59 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@69 -- # delete_tmp_conf_files 00:12:45.224 05:01:59 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@12 -- # rm -f /usr/local/etc/its.conf 00:12:45.224 05:01:59 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@70 -- # iscsitestfini 00:12:45.224 05:01:59 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:12:45.224 05:01:59 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@71 -- # exit 0 00:12:45.224 ************************************ 00:12:45.224 END TEST iscsi_tgt_calsoft 00:12:45.224 ************************************ 00:12:45.224 00:12:45.224 real 0m28.615s 00:12:45.224 user 0m41.910s 00:12:45.224 sys 0m5.276s 00:12:45.224 05:01:59 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:45.224 05:01:59 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x 00:12:45.224 05:01:59 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@31 -- # run_test iscsi_tgt_filesystem /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem/filesystem.sh 00:12:45.224 05:01:59 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:45.224 05:01:59 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:12:45.224 05:01:59 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:12:45.224 ************************************ 00:12:45.224 START TEST iscsi_tgt_filesystem 00:12:45.224 ************************************ 00:12:45.224 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem/filesystem.sh 00:12:45.486 * Looking for test storage... 00:12:45.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/setup/common.sh 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 
00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- 
common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 
00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:12:45.486 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- 
common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- 
common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=y 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:45.487 05:01:59 
iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:45.487 #define SPDK_CONFIG_H 00:12:45.487 #define SPDK_CONFIG_APPS 1 00:12:45.487 #define SPDK_CONFIG_ARCH native 00:12:45.487 #define SPDK_CONFIG_ASAN 1 00:12:45.487 #undef SPDK_CONFIG_AVAHI 00:12:45.487 #undef SPDK_CONFIG_CET 00:12:45.487 #define SPDK_CONFIG_COVERAGE 1 00:12:45.487 #define SPDK_CONFIG_CROSS_PREFIX 00:12:45.487 #undef SPDK_CONFIG_CRYPTO 00:12:45.487 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:45.487 #undef SPDK_CONFIG_CUSTOMOCF 00:12:45.487 #undef SPDK_CONFIG_DAOS 00:12:45.487 #define SPDK_CONFIG_DAOS_DIR 00:12:45.487 #define SPDK_CONFIG_DEBUG 1 00:12:45.487 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:45.487 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:12:45.487 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:45.487 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:45.487 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:45.487 #undef SPDK_CONFIG_DPDK_UADK 00:12:45.487 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:45.487 #define SPDK_CONFIG_EXAMPLES 1 00:12:45.487 #undef SPDK_CONFIG_FC 00:12:45.487 #define SPDK_CONFIG_FC_PATH 00:12:45.487 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:45.487 #define 
SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:45.487 #undef SPDK_CONFIG_FUSE 00:12:45.487 #undef SPDK_CONFIG_FUZZER 00:12:45.487 #define SPDK_CONFIG_FUZZER_LIB 00:12:45.487 #undef SPDK_CONFIG_GOLANG 00:12:45.487 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:45.487 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:45.487 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:45.487 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:45.487 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:45.487 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:45.487 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:45.487 #define SPDK_CONFIG_IDXD 1 00:12:45.487 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:45.487 #undef SPDK_CONFIG_IPSEC_MB 00:12:45.487 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:45.487 #define SPDK_CONFIG_ISAL 1 00:12:45.487 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:45.487 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:45.487 #define SPDK_CONFIG_LIBDIR 00:12:45.487 #undef SPDK_CONFIG_LTO 00:12:45.487 #define SPDK_CONFIG_MAX_LCORES 128 00:12:45.487 #define SPDK_CONFIG_NVME_CUSE 1 00:12:45.487 #undef SPDK_CONFIG_OCF 00:12:45.487 #define SPDK_CONFIG_OCF_PATH 00:12:45.487 #define SPDK_CONFIG_OPENSSL_PATH 00:12:45.487 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:45.487 #define SPDK_CONFIG_PGO_DIR 00:12:45.487 #undef SPDK_CONFIG_PGO_USE 00:12:45.487 #define SPDK_CONFIG_PREFIX /usr/local 00:12:45.487 #undef SPDK_CONFIG_RAID5F 00:12:45.487 #undef SPDK_CONFIG_RBD 00:12:45.487 #define SPDK_CONFIG_RDMA 1 00:12:45.487 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:45.487 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:45.487 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:45.487 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:45.487 #define SPDK_CONFIG_SHARED 1 00:12:45.487 #undef SPDK_CONFIG_SMA 00:12:45.487 #define SPDK_CONFIG_TESTS 1 00:12:45.487 #undef SPDK_CONFIG_TSAN 00:12:45.487 #define SPDK_CONFIG_UBLK 1 00:12:45.487 #define SPDK_CONFIG_UBSAN 1 00:12:45.487 #undef SPDK_CONFIG_UNIT_TESTS 00:12:45.487 #define SPDK_CONFIG_URING 1 00:12:45.487 #define 
SPDK_CONFIG_URING_PATH 00:12:45.487 #define SPDK_CONFIG_URING_ZNS 1 00:12:45.487 #undef SPDK_CONFIG_USDT 00:12:45.487 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:45.487 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:45.487 #undef SPDK_CONFIG_VFIO_USER 00:12:45.487 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:45.487 #define SPDK_CONFIG_VHOST 1 00:12:45.487 #define SPDK_CONFIG_VIRTIO 1 00:12:45.487 #undef SPDK_CONFIG_VTUNE 00:12:45.487 #define SPDK_CONFIG_VTUNE_DIR 00:12:45.487 #define SPDK_CONFIG_WERROR 1 00:12:45.487 #define SPDK_CONFIG_WPDK_DIR 00:12:45.487 #undef SPDK_CONFIG_XNVME 00:12:45.487 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.487 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@5 -- # export PATH 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- 
pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@68 -- # uname -s 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:45.488 05:01:59 
iscsi_tgt.iscsi_tgt_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@58 -- # : 1 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@70 -- # : 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:45.488 05:01:59 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@76 -- # : 1 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@86 -- # : 0 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@92 -- # : 0 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:45.488 05:01:59 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@94 -- # : 0 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:45.488 
05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@120 -- # : 1 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@122 -- # : 1 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@124 -- # : 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:45.488 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@126 -- # : 0 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 
00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@138 -- # : 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@140 -- # : true 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@142 -- # : 0 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@144 -- # : 1 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:45.489 
05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@154 -- # : 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@167 -- # : 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@169 -- # : 0 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@170 -- # export 
SPDK_TEST_NVMF_MDNS 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@193 -- # export 
ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@200 -- # cat 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem 
-- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:12:45.489 05:01:59 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:12:45.489 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@318 -- # [[ -z 68927 ]] 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@318 -- # kill -0 68927 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@330 -- # local 
requested_size=2147483648 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.EL8HEF 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem /tmp/spdk.EL8HEF/tests/filesystem /tmp/spdk.EL8HEF 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@327 -- # df -T 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6263181312 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267891712 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2496167936 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10989568 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem 
-- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13782138880 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5246468096 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13782138880 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5246468096 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:12:45.490 05:01:59 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267752448 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267895808 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=143360 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 
00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/iscsi-uring-vg-autotest/fedora38-libvirt/output 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=96504373248 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3198406656 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:12:45.490 * Looking for test storage... 
00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:12:45.490 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@374 -- # target_space=13782138880 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:12:45.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:12:45.491 05:01:59 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@389 -- # return 0 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1685 -- # true 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@25 -- # 
INITIATOR_TAG=2 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@11 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@5 -- # export PATH 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@13 -- # iscsitestinit 00:12:45.491 05:01:59 
iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@29 -- # timing_enter start_iscsi_tgt 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:45.491 05:01:59 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:45.491 05:02:00 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@32 -- # pid=68964 00:12:45.491 05:02:00 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@33 -- # echo 'Process pid: 68964' 00:12:45.491 Process pid: 68964 00:12:45.491 05:02:00 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@35 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:12:45.491 05:02:00 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@37 -- # waitforlisten 68964 00:12:45.491 05:02:00 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@829 -- # '[' -z 68964 ']' 00:12:45.491 05:02:00 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.491 05:02:00 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:45.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.492 05:02:00 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:45.492 05:02:00 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:45.492 05:02:00 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:45.492 05:02:00 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@31 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:12:45.751 [2024-07-24 05:02:00.128227] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:12:45.751 [2024-07-24 05:02:00.128400] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68964 ] 00:12:45.751 [2024-07-24 05:02:00.312869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:46.010 [2024-07-24 05:02:00.537242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.010 [2024-07-24 05:02:00.537389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:46.010 [2024-07-24 05:02:00.537533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.010 [2024-07-24 05:02:00.537596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:46.597 05:02:01 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:46.597 05:02:01 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@862 -- # return 0 00:12:46.597 05:02:01 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@38 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:12:46.597 05:02:01 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.597 05:02:01 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:46.597 05:02:01 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.597 
05:02:01 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@39 -- # rpc_cmd framework_start_init 00:12:46.597 05:02:01 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.597 05:02:01 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:46.856 [2024-07-24 05:02:01.260200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:47.422 iscsi_tgt is listening. Running tests... 00:12:47.422 05:02:01 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.422 05:02:01 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@40 -- # echo 'iscsi_tgt is listening. Running tests...' 00:12:47.422 05:02:01 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@42 -- # timing_exit start_iscsi_tgt 00:12:47.422 05:02:01 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:47.422 05:02:01 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:47.422 05:02:01 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@44 -- # get_first_nvme_bdf 00:12:47.422 05:02:01 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1522 -- # bdfs=() 00:12:47.422 05:02:01 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1522 -- # local bdfs 00:12:47.422 05:02:01 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1523 -- # bdfs=($(get_nvme_bdfs)) 00:12:47.422 05:02:01 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1523 -- # get_nvme_bdfs 00:12:47.422 05:02:01 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1511 -- # bdfs=() 00:12:47.422 05:02:01 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1511 -- # local bdfs 00:12:47.422 05:02:01 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:47.422 05:02:01 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:12:47.422 05:02:01 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:47.422 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1513 -- # (( 2 == 0 )) 00:12:47.422 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:12:47.422 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1525 -- # echo 0000:00:10.0 00:12:47.422 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@44 -- # bdf=0000:00:10.0 00:12:47.422 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@45 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:12:47.422 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.422 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:47.422 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.422 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@46 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:12:47.422 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.422 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:47.422 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.423 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@47 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:00:10.0 00:12:47.423 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.423 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:47.681 Nvme0n1 00:12:47.681 05:02:02 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@49 -- # rpc_cmd bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@49 -- # ls_guid=3eb16f40-1406-44d2-a0fc-eefb82a31d9f 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@50 -- # get_lvs_free_mb 3eb16f40-1406-44d2-a0fc-eefb82a31d9f 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1362 -- # local lvs_uuid=3eb16f40-1406-44d2-a0fc-eefb82a31d9f 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1363 -- # local lvs_info 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1364 -- # local fc 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1365 -- # local cs 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1366 -- # rpc_cmd bdev_lvol_get_lvstores 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1366 -- # lvs_info='[ 00:12:47.681 { 00:12:47.681 "uuid": "3eb16f40-1406-44d2-a0fc-eefb82a31d9f", 00:12:47.681 "name": "lvs_0", 00:12:47.681 "base_bdev": "Nvme0n1", 00:12:47.681 "total_data_clusters": 1278, 00:12:47.681 
"free_clusters": 1278, 00:12:47.681 "block_size": 4096, 00:12:47.681 "cluster_size": 4194304 00:12:47.681 } 00:12:47.681 ]' 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="3eb16f40-1406-44d2-a0fc-eefb82a31d9f") .free_clusters' 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1367 -- # fc=1278 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="3eb16f40-1406-44d2-a0fc-eefb82a31d9f") .cluster_size' 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1368 -- # cs=4194304 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1371 -- # free_mb=5112 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1372 -- # echo 5112 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@50 -- # free_mb=5112 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@52 -- # '[' 5112 -gt 2048 ']' 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@53 -- # rpc_cmd bdev_lvol_create -u 3eb16f40-1406-44d2-a0fc-eefb82a31d9f lbd_0 2048 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:47.681 25ec32b6-f04f-4394-8336-066d4dc8b8cc 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@61 -- # lvol_name=lvs_0/lbd_0 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@62 -- # rpc_cmd iscsi_create_target_node Target1 Target1_alias lvs_0/lbd_0:0 1:2 256 -d 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 
00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.681 05:02:02 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@63 -- # sleep 1 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@65 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:12:49.053 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@66 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:12:49.053 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:12:49.053 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@67 -- # waitforiscsidevices 1 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@116 -- # local num=1 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:12:49.053 [2024-07-24 05:02:03.403080] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # n=1 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@123 -- # return 0 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@69 -- # get_bdev_size 
lvs_0/lbd_0 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1376 -- # local bdev_name=lvs_0/lbd_0 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1377 -- # local bdev_info 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1378 -- # local bs 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1379 -- # local nb 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1380 -- # rpc_cmd bdev_get_bdevs -b lvs_0/lbd_0 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:12:49.053 { 00:12:49.053 "name": "25ec32b6-f04f-4394-8336-066d4dc8b8cc", 00:12:49.053 "aliases": [ 00:12:49.053 "lvs_0/lbd_0" 00:12:49.053 ], 00:12:49.053 "product_name": "Logical Volume", 00:12:49.053 "block_size": 4096, 00:12:49.053 "num_blocks": 524288, 00:12:49.053 "uuid": "25ec32b6-f04f-4394-8336-066d4dc8b8cc", 00:12:49.053 "assigned_rate_limits": { 00:12:49.053 "rw_ios_per_sec": 0, 00:12:49.053 "rw_mbytes_per_sec": 0, 00:12:49.053 "r_mbytes_per_sec": 0, 00:12:49.053 "w_mbytes_per_sec": 0 00:12:49.053 }, 00:12:49.053 "claimed": false, 00:12:49.053 "zoned": false, 00:12:49.053 "supported_io_types": { 00:12:49.053 "read": true, 00:12:49.053 "write": true, 00:12:49.053 "unmap": true, 00:12:49.053 "flush": false, 00:12:49.053 "reset": true, 00:12:49.053 "nvme_admin": false, 00:12:49.053 "nvme_io": false, 00:12:49.053 "nvme_io_md": false, 00:12:49.053 "write_zeroes": true, 00:12:49.053 "zcopy": false, 00:12:49.053 "get_zone_info": false, 00:12:49.053 "zone_management": false, 00:12:49.053 
"zone_append": false, 00:12:49.053 "compare": false, 00:12:49.053 "compare_and_write": false, 00:12:49.053 "abort": false, 00:12:49.053 "seek_hole": true, 00:12:49.053 "seek_data": true, 00:12:49.053 "copy": false, 00:12:49.053 "nvme_iov_md": false 00:12:49.053 }, 00:12:49.053 "driver_specific": { 00:12:49.053 "lvol": { 00:12:49.053 "lvol_store_uuid": "3eb16f40-1406-44d2-a0fc-eefb82a31d9f", 00:12:49.053 "base_bdev": "Nvme0n1", 00:12:49.053 "thin_provision": false, 00:12:49.053 "num_allocated_clusters": 512, 00:12:49.053 "snapshot": false, 00:12:49.053 "clone": false, 00:12:49.053 "esnap_clone": false 00:12:49.053 } 00:12:49.053 } 00:12:49.053 } 00:12:49.053 ]' 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1381 -- # bs=4096 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1382 -- # nb=524288 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1385 -- # bdev_size=2048 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1386 -- # echo 2048 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@69 -- # lvol_size=2147483648 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@70 -- # trap 'iscsicleanup; remove_backends; umount /mnt/device; rm -rf /mnt/device; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@72 -- # mkdir -p /mnt/device 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # grep 'Attached scsi disk' 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # iscsiadm -m session -P 3 00:12:49.053 
05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # awk '{print $4}' 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # dev=sda 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@76 -- # waitforfile /dev/sda 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1263 -- # local i=0 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1264 -- # '[' '!' -e /dev/sda ']' 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1270 -- # '[' '!' -e /dev/sda ']' 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1274 -- # return 0 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@78 -- # sec_size_to_bytes sda 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@76 -- # local dev=sda 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@78 -- # [[ -e /sys/block/sda ]] 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@80 -- # echo 2147483648 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@78 -- # dev_size=2147483648 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@80 -- # (( lvol_size == dev_size )) 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@81 -- # parted -s /dev/sda mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:49.053 [2024-07-24 05:02:03.579478] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:49.053 05:02:03 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@82 -- # sleep 1 00:12:49.989 05:02:04 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@144 -- # run_test iscsi_tgt_filesystem_ext4 filesystem_test ext4 00:12:49.989 05:02:04 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:49.989 
05:02:04 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:49.989 05:02:04 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:49.989 ************************************ 00:12:49.989 START TEST iscsi_tgt_filesystem_ext4 00:12:49.989 ************************************ 00:12:49.989 05:02:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1123 -- # filesystem_test ext4 00:12:49.989 05:02:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@89 -- # fstype=ext4 00:12:49.989 05:02:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@91 -- # make_filesystem ext4 /dev/sda1 00:12:49.989 05:02:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:12:49.989 05:02:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda1 00:12:49.989 05:02:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:12:49.989 05:02:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:12:49.989 05:02:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:12:49.989 05:02:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:12:49.989 05:02:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda1 00:12:49.989 mke2fs 1.46.5 (30-Dec-2021) 00:12:50.248 Discarding device blocks: 0/522240 done 00:12:50.248 Creating filesystem with 522240 4k blocks and 130560 inodes 00:12:50.248 Filesystem UUID: b090bd17-2fbe-4a9c-98bb-74e97cc78833 00:12:50.248 Superblock backups stored on blocks: 00:12:50.248 32768, 98304, 
163840, 229376, 294912 00:12:50.248 00:12:50.248 Allocating group tables: 0/16 done 00:12:50.248 Writing inode tables: 0/16 done 00:12:50.248 Creating journal (8192 blocks): done 00:12:50.248 Writing superblocks and filesystem accounting information: 0/16 done 00:12:50.248 00:12:50.248 05:02:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:12:50.248 05:02:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:12:50.506 05:02:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@93 -- # '[' 1 -eq 1 ']' 00:12:50.506 05:02:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@94 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randwrite -ioengine=libaio -bs=4k -size=1024M -name=job0 00:12:50.506 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:12:50.506 fio-3.35 00:12:50.506 Starting 1 thread 00:12:50.506 job0: Laying out IO file (1 file / 1024MiB) 00:13:05.382 00:13:05.382 job0: (groupid=0, jobs=1): err= 0: pid=69126: Wed Jul 24 05:02:19 2024 00:13:05.382 write: IOPS=18.5k, BW=72.4MiB/s (75.9MB/s)(1024MiB/14149msec); 0 zone resets 00:13:05.382 slat (usec): min=5, max=39395, avg=18.47, stdev=176.21 00:13:05.382 clat (usec): min=1144, max=45568, avg=3434.36, stdev=1774.24 00:13:05.382 lat (usec): min=1158, max=45581, avg=3452.83, stdev=1788.15 00:13:05.382 clat percentiles (usec): 00:13:05.382 | 1.00th=[ 2073], 5.00th=[ 2343], 10.00th=[ 2409], 20.00th=[ 2507], 00:13:05.382 | 30.00th=[ 2671], 40.00th=[ 3163], 50.00th=[ 3294], 60.00th=[ 3425], 00:13:05.382 | 70.00th=[ 3851], 80.00th=[ 4113], 90.00th=[ 4293], 95.00th=[ 4555], 00:13:05.382 | 99.00th=[ 5407], 99.50th=[ 7504], 99.90th=[26346], 99.95th=[38536], 00:13:05.382 | 99.99th=[43779] 00:13:05.382 bw ( KiB/s): min=65032, 
max=81240, per=99.93%, avg=74054.50, stdev=4414.68, samples=28 00:13:05.382 iops : min=16258, max=20310, avg=18513.61, stdev=1103.66, samples=28 00:13:05.382 lat (msec) : 2=0.76%, 4=72.97%, 10=25.82%, 20=0.04%, 50=0.42% 00:13:05.382 cpu : usr=4.97%, sys=22.35%, ctx=17358, majf=0, minf=1 00:13:05.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:13:05.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:05.382 issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:05.382 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:05.382 00:13:05.382 Run status group 0 (all jobs): 00:13:05.382 WRITE: bw=72.4MiB/s (75.9MB/s), 72.4MiB/s-72.4MiB/s (75.9MB/s-75.9MB/s), io=1024MiB (1074MB), run=14149-14149msec 00:13:05.382 00:13:05.382 Disk stats (read/write): 00:13:05.382 sda: ios=0/260598, merge=0/2469, ticks=0/793826, in_queue=793826, util=99.27% 00:13:05.382 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@96 -- # umount /mnt/device 00:13:05.382 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@98 -- # iscsiadm -m node --logout 00:13:05.382 Logging out of session [sid: 1, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:13:05.382 Logout of [sid: 1, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:13:05.382 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@99 -- # waitforiscsidevices 0 00:13:05.382 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@116 -- # local num=0 00:13:05.382 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:13:05.382 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:13:05.382 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:13:05.382 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:13:05.382 iscsiadm: No active sessions. 00:13:05.382 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # true 00:13:05.382 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # n=0 00:13:05.382 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:13:05.382 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@123 -- # return 0 00:13:05.382 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@100 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:13:05.382 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:13:05.382 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:13:05.382 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@101 -- # waitforiscsidevices 1 00:13:05.382 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@116 -- # local num=1 00:13:05.382 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:13:05.382 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:13:05.382 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:13:05.383 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:13:05.383 [2024-07-24 05:02:19.496620] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:05.383 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # n=1 00:13:05.383 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:13:05.383 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@123 -- # return 0 00:13:05.383 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@103 -- # iscsiadm -m session -P 3 00:13:05.383 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@103 -- # grep 'Attached scsi disk' 00:13:05.383 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@103 -- # awk '{print $4}' 00:13:05.383 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@103 -- # dev=sda 00:13:05.383 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@105 -- # waitforfile /dev/sda1 00:13:05.383 05:02:19 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1263 -- # local i=0 00:13:05.383 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1264 -- # '[' '!' -e /dev/sda1 ']' 00:13:05.383 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1270 -- # '[' '!' -e /dev/sda1 ']' 00:13:05.383 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1274 -- # return 0 00:13:05.383 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@106 -- # mount -o rw /dev/sda1 /mnt/device 00:13:05.383 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@107 -- # '[' -f /mnt/device/test ']' 00:13:05.383 File existed. 00:13:05.383 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@108 -- # echo 'File existed.' 00:13:05.383 05:02:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@109 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randread -ioengine=libaio -bs=4k -runtime=20 -time_based=1 -name=job0 00:13:05.383 job0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:13:05.383 fio-3.35 00:13:05.383 Starting 1 thread 00:13:27.314 00:13:27.314 job0: (groupid=0, jobs=1): err= 0: pid=69386: Wed Jul 24 05:02:39 2024 00:13:27.314 read: IOPS=18.6k, BW=72.8MiB/s (76.3MB/s)(1455MiB/20003msec) 00:13:27.314 slat (usec): min=4, max=4130, avg= 9.01, stdev=37.49 00:13:27.314 clat (usec): min=853, max=23451, avg=3423.75, stdev=1014.75 00:13:27.314 lat (usec): min=861, max=25205, avg=3432.75, stdev=1020.74 00:13:27.314 clat percentiles (usec): 00:13:27.314 | 1.00th=[ 2278], 5.00th=[ 2376], 10.00th=[ 2409], 20.00th=[ 2507], 00:13:27.314 | 30.00th=[ 2606], 40.00th=[ 3195], 50.00th=[ 3294], 60.00th=[ 
3392], 00:13:27.314 | 70.00th=[ 4015], 80.00th=[ 4146], 90.00th=[ 4293], 95.00th=[ 4948], 00:13:27.314 | 99.00th=[ 6587], 99.50th=[ 7111], 99.90th=[12911], 99.95th=[17957], 00:13:27.314 | 99.99th=[20055] 00:13:27.314 bw ( KiB/s): min=45872, max=85824, per=100.00%, avg=74556.10, stdev=8270.27, samples=39 00:13:27.314 iops : min=11468, max=21456, avg=18639.03, stdev=2067.57, samples=39 00:13:27.314 lat (usec) : 1000=0.01% 00:13:27.314 lat (msec) : 2=0.36%, 4=69.21%, 10=30.27%, 20=0.15%, 50=0.01% 00:13:27.314 cpu : usr=5.80%, sys=14.66%, ctx=23542, majf=0, minf=65 00:13:27.314 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:13:27.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:27.314 issued rwts: total=372581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.314 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:27.314 00:13:27.314 Run status group 0 (all jobs): 00:13:27.314 READ: bw=72.8MiB/s (76.3MB/s), 72.8MiB/s-72.8MiB/s (76.3MB/s-76.3MB/s), io=1455MiB (1526MB), run=20003-20003msec 00:13:27.314 00:13:27.314 Disk stats (read/write): 00:13:27.314 sda: ios=369676/5, merge=1336/2, ticks=1191573/5, in_queue=1191579, util=99.60% 00:13:27.314 05:02:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@116 -- # rm -rf /mnt/device/test 00:13:27.314 05:02:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@117 -- # umount /mnt/device 00:13:27.314 ************************************ 00:13:27.314 END TEST iscsi_tgt_filesystem_ext4 00:13:27.314 ************************************ 00:13:27.314 00:13:27.314 real 0m35.250s 00:13:27.314 user 0m2.110s 00:13:27.314 sys 0m6.357s 00:13:27.314 05:02:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:27.314 05:02:39 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:27.314 05:02:39 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@145 -- # run_test iscsi_tgt_filesystem_btrfs filesystem_test btrfs 00:13:27.314 05:02:39 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:27.314 05:02:39 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:27.314 05:02:39 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:27.314 ************************************ 00:13:27.314 START TEST iscsi_tgt_filesystem_btrfs 00:13:27.314 ************************************ 00:13:27.314 05:02:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1123 -- # filesystem_test btrfs 00:13:27.314 05:02:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@89 -- # fstype=btrfs 00:13:27.314 05:02:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@91 -- # make_filesystem btrfs /dev/sda1 00:13:27.314 05:02:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:13:27.314 05:02:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda1 00:13:27.314 05:02:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:13:27.314 05:02:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:13:27.314 05:02:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:13:27.314 05:02:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:13:27.314 05:02:39 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/sda1 00:13:27.314 btrfs-progs v6.6.2 00:13:27.314 See https://btrfs.readthedocs.io for more information. 00:13:27.314 00:13:27.314 Performing full device TRIM /dev/sda1 (1.99GiB) ... 00:13:27.314 NOTE: several default settings have changed in version 5.15, please make sure 00:13:27.314 this does not affect your deployments: 00:13:27.314 - DUP for metadata (-m dup) 00:13:27.314 - enabled no-holes (-O no-holes) 00:13:27.314 - enabled free-space-tree (-R free-space-tree) 00:13:27.314 00:13:27.314 Label: (null) 00:13:27.314 UUID: 0a57139e-0122-4c14-a413-a16cf5c36a99 00:13:27.314 Node size: 16384 00:13:27.314 Sector size: 4096 00:13:27.314 Filesystem size: 1.99GiB 00:13:27.314 Block group profiles: 00:13:27.314 Data: single 8.00MiB 00:13:27.314 Metadata: DUP 102.00MiB 00:13:27.314 System: DUP 8.00MiB 00:13:27.314 SSD detected: yes 00:13:27.314 Zoned device: no 00:13:27.314 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:27.314 Runtime features: free-space-tree 00:13:27.314 Checksum: crc32c 00:13:27.314 Number of devices: 1 00:13:27.314 Devices: 00:13:27.314 ID SIZE PATH 00:13:27.314 1 1.99GiB /dev/sda1 00:13:27.314 00:13:27.314 05:02:40 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:13:27.314 05:02:40 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:13:27.314 05:02:40 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@93 -- # '[' 1 -eq 1 ']' 00:13:27.314 05:02:40 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@94 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randwrite -ioengine=libaio -bs=4k -size=1024M -name=job0 00:13:27.314 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:13:27.314 fio-3.35 00:13:27.314 Starting 1 thread 00:13:27.314 job0: Laying out IO file (1 file / 1024MiB) 00:13:42.192 00:13:42.192 job0: (groupid=0, jobs=1): err= 0: pid=69641: Wed Jul 24 05:02:55 2024 00:13:42.192 write: IOPS=17.2k, BW=67.3MiB/s (70.6MB/s)(1024MiB/15207msec); 0 zone resets 00:13:42.192 slat (usec): min=9, max=4404, avg=44.33, stdev=98.32 00:13:42.192 clat (usec): min=801, max=14583, avg=3666.38, stdev=1350.73 00:13:42.192 lat (usec): min=1039, max=14914, avg=3710.70, stdev=1364.38 00:13:42.192 clat percentiles (usec): 00:13:42.192 | 1.00th=[ 1827], 5.00th=[ 2114], 10.00th=[ 2311], 20.00th=[ 2507], 00:13:42.192 | 30.00th=[ 2835], 40.00th=[ 3130], 50.00th=[ 3359], 60.00th=[ 3687], 00:13:42.192 | 70.00th=[ 4047], 80.00th=[ 4555], 90.00th=[ 5473], 95.00th=[ 6325], 00:13:42.192 | 99.00th=[ 8160], 99.50th=[ 8848], 99.90th=[10290], 99.95th=[10945], 00:13:42.192 | 99.99th=[12518] 00:13:42.192 bw ( KiB/s): min=56656, max=78128, per=99.86%, avg=68854.83, stdev=5321.61, samples=30 00:13:42.192 iops : min=14164, max=19532, avg=17213.70, stdev=1330.40, samples=30 00:13:42.192 lat (usec) : 1000=0.01% 00:13:42.192 lat (msec) : 2=3.05%, 4=64.78%, 10=32.02%, 20=0.14% 00:13:42.192 cpu : usr=5.88%, sys=37.84%, ctx=59651, majf=0, minf=1 00:13:42.192 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:13:42.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:42.192 issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.192 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:42.192 00:13:42.192 Run status group 0 (all jobs): 00:13:42.192 WRITE: bw=67.3MiB/s (70.6MB/s), 67.3MiB/s-67.3MiB/s (70.6MB/s-70.6MB/s), io=1024MiB (1074MB), run=15207-15207msec 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- 
filesystem/filesystem.sh@96 -- # umount /mnt/device 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@98 -- # iscsiadm -m node --logout 00:13:42.192 Logging out of session [sid: 2, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:13:42.192 Logout of [sid: 2, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@99 -- # waitforiscsidevices 0 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@116 -- # local num=0 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:13:42.192 iscsiadm: No active sessions. 
00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # true 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # n=0 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@123 -- # return 0 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@100 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:13:42.192 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:13:42.192 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@101 -- # waitforiscsidevices 1 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@116 -- # local num=1 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:13:42.192 [2024-07-24 05:02:55.783144] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # n=1 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- 
iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@123 -- # return 0 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@103 -- # awk '{print $4}' 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@103 -- # iscsiadm -m session -P 3 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@103 -- # grep 'Attached scsi disk' 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@103 -- # dev=sda 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@105 -- # waitforfile /dev/sda1 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1263 -- # local i=0 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1264 -- # '[' '!' -e /dev/sda1 ']' 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1270 -- # '[' '!' -e /dev/sda1 ']' 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1274 -- # return 0 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@106 -- # mount -o rw /dev/sda1 /mnt/device 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@107 -- # '[' -f /mnt/device/test ']' 00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@108 -- # echo 'File existed.' 00:13:42.192 File existed. 
00:13:42.192 05:02:55 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@109 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randread -ioengine=libaio -bs=4k -runtime=20 -time_based=1 -name=job0 00:13:42.192 job0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:13:42.192 fio-3.35 00:13:42.192 Starting 1 thread 00:14:04.137 00:14:04.137 job0: (groupid=0, jobs=1): err= 0: pid=69863: Wed Jul 24 05:03:16 2024 00:14:04.137 read: IOPS=19.1k, BW=74.8MiB/s (78.4MB/s)(1496MiB/20003msec) 00:14:04.137 slat (usec): min=4, max=2343, avg= 9.15, stdev=16.42 00:14:04.137 clat (usec): min=1064, max=25489, avg=3330.60, stdev=841.64 00:14:04.137 lat (usec): min=1074, max=26349, avg=3339.75, stdev=845.30 00:14:04.137 clat percentiles (usec): 00:14:04.137 | 1.00th=[ 2147], 5.00th=[ 2343], 10.00th=[ 2409], 20.00th=[ 2507], 00:14:04.137 | 30.00th=[ 2638], 40.00th=[ 3163], 50.00th=[ 3261], 60.00th=[ 3392], 00:14:04.137 | 70.00th=[ 3916], 80.00th=[ 4113], 90.00th=[ 4228], 95.00th=[ 4490], 00:14:04.137 | 99.00th=[ 5276], 99.50th=[ 5473], 99.90th=[ 7832], 99.95th=[10945], 00:14:04.137 | 99.99th=[20317] 00:14:04.137 bw ( KiB/s): min=59272, max=81856, per=100.00%, avg=76646.21, stdev=3704.81, samples=39 00:14:04.137 iops : min=14818, max=20464, avg=19161.54, stdev=926.20, samples=39 00:14:04.137 lat (msec) : 2=0.19%, 4=73.21%, 10=26.55%, 20=0.04%, 50=0.01% 00:14:04.137 cpu : usr=4.79%, sys=16.88%, ctx=41407, majf=0, minf=65 00:14:04.137 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:14:04.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:04.137 issued rwts: total=382956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.137 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:04.137 00:14:04.137 Run status group 0 
(all jobs): 00:14:04.137 READ: bw=74.8MiB/s (78.4MB/s), 74.8MiB/s-74.8MiB/s (78.4MB/s-78.4MB/s), io=1496MiB (1569MB), run=20003-20003msec 00:14:04.137 05:03:16 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@116 -- # rm -rf /mnt/device/test 00:14:04.137 05:03:16 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@117 -- # umount /mnt/device 00:14:04.137 ************************************ 00:14:04.137 END TEST iscsi_tgt_filesystem_btrfs 00:14:04.137 ************************************ 00:14:04.137 00:14:04.137 real 0m36.249s 00:14:04.137 user 0m2.102s 00:14:04.137 sys 0m9.491s 00:14:04.137 05:03:16 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:04.137 05:03:16 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:04.137 05:03:16 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@146 -- # run_test iscsi_tgt_filesystem_xfs filesystem_test xfs 00:14:04.137 05:03:16 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:04.137 05:03:16 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:04.137 05:03:16 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:04.137 ************************************ 00:14:04.137 START TEST iscsi_tgt_filesystem_xfs 00:14:04.137 ************************************ 00:14:04.137 05:03:16 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1123 -- # filesystem_test xfs 00:14:04.137 05:03:16 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@89 -- # fstype=xfs 00:14:04.137 05:03:16 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@91 -- # make_filesystem xfs /dev/sda1 00:14:04.137 05:03:16 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:14:04.137 05:03:16 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda1 00:14:04.137 05:03:16 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:14:04.137 05:03:16 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:14:04.137 05:03:16 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:14:04.137 05:03:16 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:14:04.137 05:03:16 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/sda1 00:14:04.137 meta-data=/dev/sda1 isize=512 agcount=4, agsize=130560 blks 00:14:04.137 = sectsz=4096 attr=2, projid32bit=1 00:14:04.137 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:04.137 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:04.137 data = bsize=4096 blocks=522240, imaxpct=25 00:14:04.137 = sunit=0 swidth=0 blks 00:14:04.137 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:04.137 log =internal log bsize=4096 blocks=16384, version=2 00:14:04.137 = sectsz=4096 sunit=1 blks, lazy-count=1 00:14:04.137 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:04.137 Discarding blocks...Done. 
00:14:04.137 05:03:16 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:14:04.137 05:03:16 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:14:04.137 05:03:17 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@93 -- # '[' 1 -eq 1 ']' 00:14:04.137 05:03:17 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@94 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randwrite -ioengine=libaio -bs=4k -size=1024M -name=job0 00:14:04.137 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:14:04.137 fio-3.35 00:14:04.137 Starting 1 thread 00:14:04.137 job0: Laying out IO file (1 file / 1024MiB) 00:14:19.027 00:14:19.027 job0: (groupid=0, jobs=1): err= 0: pid=70121: Wed Jul 24 05:03:31 2024 00:14:19.027 write: IOPS=19.1k, BW=74.6MiB/s (78.3MB/s)(1024MiB/13720msec); 0 zone resets 00:14:19.027 slat (usec): min=3, max=2614, avg=17.87, stdev=92.73 00:14:19.027 clat (usec): min=1268, max=9914, avg=3330.54, stdev=777.98 00:14:19.027 lat (usec): min=1302, max=9921, avg=3348.42, stdev=782.17 00:14:19.027 clat percentiles (usec): 00:14:19.027 | 1.00th=[ 2057], 5.00th=[ 2376], 10.00th=[ 2409], 20.00th=[ 2507], 00:14:19.027 | 30.00th=[ 2671], 40.00th=[ 3195], 50.00th=[ 3294], 60.00th=[ 3392], 00:14:19.027 | 70.00th=[ 3785], 80.00th=[ 4080], 90.00th=[ 4293], 95.00th=[ 4621], 00:14:19.027 | 99.00th=[ 5276], 99.50th=[ 5669], 99.90th=[ 6063], 99.95th=[ 6718], 00:14:19.027 | 99.99th=[ 8029] 00:14:19.028 bw ( KiB/s): min=67856, max=79096, per=99.92%, avg=76364.04, stdev=1994.04, samples=27 00:14:19.028 iops : min=16964, max=19774, avg=19091.00, stdev=498.51, samples=27 00:14:19.028 lat (msec) : 2=0.82%, 4=73.38%, 10=25.80% 00:14:19.028 cpu : usr=5.24%, sys=12.52%, ctx=17320, majf=0, minf=1 00:14:19.028 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:14:19.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:19.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:19.028 issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:19.028 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:19.028 00:14:19.028 Run status group 0 (all jobs): 00:14:19.028 WRITE: bw=74.6MiB/s (78.3MB/s), 74.6MiB/s-74.6MiB/s (78.3MB/s-78.3MB/s), io=1024MiB (1074MB), run=13720-13720msec 00:14:19.028 00:14:19.028 Disk stats (read/write): 00:14:19.028 sda: ios=0/259450, merge=0/836, ticks=0/765472, in_queue=765473, util=99.37% 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@96 -- # umount /mnt/device 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@98 -- # iscsiadm -m node --logout 00:14:19.028 Logging out of session [sid: 3, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:14:19.028 Logout of [sid: 3, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@99 -- # waitforiscsidevices 0 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@116 -- # local num=0 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:14:19.028 iscsiadm: No active sessions. 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # true 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # n=0 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@123 -- # return 0 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@100 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:14:19.028 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:14:19.028 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@101 -- # waitforiscsidevices 1 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@116 -- # local num=1 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:14:19.028 [2024-07-24 05:03:31.430619] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # n=1 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@123 -- # return 0 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@103 -- # iscsiadm -m session -P 3 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@103 -- # grep 'Attached scsi disk' 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@103 -- # awk '{print $4}' 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@103 -- # dev=sda 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@105 -- # waitforfile /dev/sda1 00:14:19.028 05:03:31 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1263 -- # local i=0 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1264 -- # '[' '!' -e /dev/sda1 ']' 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1270 -- # '[' '!' -e /dev/sda1 ']' 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1274 -- # return 0 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@106 -- # mount -o rw /dev/sda1 /mnt/device 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@107 -- # '[' -f /mnt/device/test ']' 00:14:19.028 File existed. 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@108 -- # echo 'File existed.' 00:14:19.028 05:03:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@109 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randread -ioengine=libaio -bs=4k -runtime=20 -time_based=1 -name=job0 00:14:19.028 job0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:14:19.028 fio-3.35 00:14:19.028 Starting 1 thread 00:14:40.944 00:14:40.944 job0: (groupid=0, jobs=1): err= 0: pid=70321: Wed Jul 24 05:03:51 2024 00:14:40.944 read: IOPS=17.7k, BW=69.3MiB/s (72.7MB/s)(1387MiB/20003msec) 00:14:40.944 slat (usec): min=3, max=365, avg= 7.54, stdev= 6.85 00:14:40.944 clat (usec): min=1379, max=15651, avg=3597.69, stdev=833.67 00:14:40.944 lat (usec): min=1400, max=15658, avg=3605.22, stdev=833.41 00:14:40.944 clat percentiles (usec): 00:14:40.944 | 1.00th=[ 2343], 5.00th=[ 2507], 10.00th=[ 2638], 20.00th=[ 2704], 00:14:40.944 | 30.00th=[ 2835], 40.00th=[ 3392], 50.00th=[ 3589], 60.00th=[ 3687], 
00:14:40.944 | 70.00th=[ 4080], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 4817], 00:14:40.944 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 6652], 99.95th=[ 7439], 00:14:40.944 | 99.99th=[10421] 00:14:40.944 bw ( KiB/s): min=63896, max=78504, per=100.00%, avg=71101.33, stdev=2937.82, samples=39 00:14:40.944 iops : min=15974, max=19626, avg=17775.33, stdev=734.45, samples=39 00:14:40.944 lat (msec) : 2=0.16%, 4=67.73%, 10=32.10%, 20=0.01% 00:14:40.944 cpu : usr=4.98%, sys=13.02%, ctx=22621, majf=0, minf=65 00:14:40.944 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:14:40.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:40.944 issued rwts: total=355005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.944 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:40.944 00:14:40.944 Run status group 0 (all jobs): 00:14:40.944 READ: bw=69.3MiB/s (72.7MB/s), 69.3MiB/s-69.3MiB/s (72.7MB/s-72.7MB/s), io=1387MiB (1454MB), run=20003-20003msec 00:14:40.944 00:14:40.944 Disk stats (read/write): 00:14:40.944 sda: ios=351647/0, merge=1326/0, ticks=1202267/0, in_queue=1202267, util=99.60% 00:14:40.944 05:03:51 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@116 -- # rm -rf /mnt/device/test 00:14:40.944 05:03:51 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@117 -- # umount /mnt/device 00:14:40.944 00:14:40.944 real 0m35.649s 00:14:40.944 user 0m1.954s 00:14:40.944 sys 0m4.549s 00:14:40.944 05:03:51 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:40.944 05:03:51 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:40.944 ************************************ 00:14:40.944 END TEST iscsi_tgt_filesystem_xfs 00:14:40.944 
************************************ 00:14:40.944 05:03:51 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@148 -- # rm -rf /mnt/device 00:14:40.944 05:03:51 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:14:40.944 05:03:51 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@152 -- # iscsicleanup 00:14:40.944 05:03:51 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:14:40.944 Cleaning up iSCSI connection 00:14:40.944 05:03:51 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:14:40.944 Logging out of session [sid: 4, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:14:40.944 Logout of [sid: 4, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:14:40.944 05:03:51 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:14:40.944 05:03:51 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@983 -- # rm -rf 00:14:40.944 INFO: Removing lvol bdev 00:14:40.944 05:03:51 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@153 -- # remove_backends 00:14:40.944 05:03:51 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@17 -- # echo 'INFO: Removing lvol bdev' 00:14:40.944 05:03:51 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@18 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:14:40.944 05:03:51 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.944 05:03:51 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:40.944 [2024-07-24 05:03:51.981191] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (25ec32b6-f04f-4394-8336-066d4dc8b8cc) received event(SPDK_BDEV_EVENT_REMOVE) 00:14:40.944 INFO: Removing lvol stores 00:14:40.944 05:03:51 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.944 
05:03:51 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@20 -- # echo 'INFO: Removing lvol stores' 00:14:40.944 05:03:51 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@21 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:14:40.944 05:03:51 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.944 05:03:51 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:40.944 INFO: Removing NVMe 00:14:40.944 05:03:51 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.944 05:03:51 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@23 -- # echo 'INFO: Removing NVMe' 00:14:40.944 05:03:51 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@24 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:14:40.944 05:03:51 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.944 05:03:51 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:40.944 05:03:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.944 05:03:52 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@26 -- # return 0 00:14:40.944 05:03:52 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@154 -- # killprocess 68964 00:14:40.944 05:03:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@948 -- # '[' -z 68964 ']' 00:14:40.944 05:03:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@952 -- # kill -0 68964 00:14:40.944 05:03:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@953 -- # uname 00:14:40.944 05:03:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:40.944 05:03:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68964 00:14:40.944 05:03:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 
00:14:40.944 05:03:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:40.945 killing process with pid 68964 00:14:40.945 05:03:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68964' 00:14:40.945 05:03:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@967 -- # kill 68964 00:14:40.945 05:03:52 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@972 -- # wait 68964 00:14:40.945 05:03:54 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@155 -- # iscsitestfini 00:14:40.945 05:03:54 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:14:40.945 00:14:40.945 real 1m54.694s 00:14:40.945 user 7m19.520s 00:14:40.945 sys 0m33.167s 00:14:40.945 ************************************ 00:14:40.945 END TEST iscsi_tgt_filesystem 00:14:40.945 ************************************ 00:14:40.945 05:03:54 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:40.945 05:03:54 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:40.945 05:03:54 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@32 -- # run_test chap_during_discovery /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_discovery.sh 00:14:40.945 05:03:54 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:40.945 05:03:54 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:40.945 05:03:54 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:14:40.945 ************************************ 00:14:40.945 START TEST chap_during_discovery 00:14:40.945 ************************************ 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_discovery.sh 00:14:40.945 * Looking for test storage... 
00:14:40.945 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 
00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_common.sh 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@7 -- # TARGET_NAME=iqn.2016-06.io.spdk:disk1 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@8 -- # TARGET_ALIAS_NAME=disk1_alias 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@9 -- # MALLOC_BDEV_SIZE=64 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@10 -- # MALLOC_BLOCK_SIZE=512 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@13 -- # USER=chapo 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@14 -- # MUSER=mchapo 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@15 -- # PASS=123456789123 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@16 -- # MPASS=321978654321 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@19 -- # iscsitestinit 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@21 -- # set_up_iscsi_target 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@140 -- # timing_enter start_iscsi_tgt 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 
00:14:40.945 iSCSI target launched. pid: 70629 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@142 -- # pid=70629 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@143 -- # echo 'iSCSI target launched. pid: 70629' 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@144 -- # trap 'killprocess $pid;exit 1' SIGINT SIGTERM EXIT 00:14:40.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@145 -- # waitforlisten 70629 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@829 -- # '[' -z 70629 ']' 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@141 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:40.945 05:03:54 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:40.945 [2024-07-24 05:03:54.755460] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:14:40.945 [2024-07-24 05:03:54.755658] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70629 ] 00:14:40.945 [2024-07-24 05:03:55.063413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.945 [2024-07-24 05:03:55.262779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.945 05:03:55 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:40.945 05:03:55 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@862 -- # return 0 00:14:40.945 05:03:55 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@146 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:14:40.945 05:03:55 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.945 05:03:55 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:40.945 05:03:55 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.945 05:03:55 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@147 -- # rpc_cmd framework_start_init 00:14:40.945 05:03:55 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.945 05:03:55 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:41.203 [2024-07-24 05:03:55.686669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:41.768 iscsi_tgt is listening. Running tests... 00:14:41.768 05:03:56 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.768 05:03:56 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@148 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:14:41.768 05:03:56 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@149 -- # timing_exit start_iscsi_tgt 00:14:41.768 05:03:56 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:41.768 05:03:56 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:41.768 05:03:56 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@151 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:14:41.768 05:03:56 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.768 05:03:56 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:41.768 05:03:56 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.768 05:03:56 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@152 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:14:41.768 05:03:56 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.768 05:03:56 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:41.768 05:03:56 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.768 05:03:56 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@153 -- # rpc_cmd bdev_malloc_create 64 512 00:14:41.768 05:03:56 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.768 05:03:56 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.026 Malloc0 00:14:42.026 05:03:56 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.026 05:03:56 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@154 -- # rpc_cmd iscsi_create_target_node iqn.2016-06.io.spdk:disk1 disk1_alias Malloc0:0 1:2 256 -d 00:14:42.026 05:03:56 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.026 05:03:56 
iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.026 05:03:56 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.026 05:03:56 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@155 -- # sleep 1 00:14:42.959 configuring target for bidirectional authentication 00:14:42.959 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@156 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@24 -- # echo 'configuring target for bidirectional authentication' 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@25 -- # config_chap_credentials_for_target -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@84 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@13 -- # OPTIND=0 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@20 -- # CHAP_MPASS= 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 
00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 
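The xtrace above steps through chap_common.sh's parse_cmd_line getopts loop one option at a time. A minimal standalone sketch of the same option handling (the variable names and option string come from the trace; the function body is a simplified reconstruction, not SPDK's exact source):

```shell
#!/usr/bin/env bash
# Simplified reconstruction of the parse_cmd_line loop traced above.
# Option string :t:u:s:r:m:dlb matches chap_common.sh@23.
parse_cmd_line() {
    OPTIND=0
    DURING_DISCOVERY=0 DURING_LOGIN=0 BI_DIRECT=0
    CHAP_USER="" CHAP_PASS="" CHAP_MUSER="" CHAP_MPASS=""
    AUTH_GROUP_ID=1
    while getopts ":t:u:s:r:m:dlb" opt; do
        case ${opt} in
            t) AUTH_GROUP_ID=$OPTARG ;;    # auth group tag
            u) CHAP_USER=$OPTARG ;;        # CHAP user
            s) CHAP_PASS=$OPTARG ;;        # CHAP secret
            r) CHAP_MUSER=$OPTARG ;;       # mutual CHAP user
            m) CHAP_MPASS=$OPTARG ;;       # mutual CHAP secret
            d) DURING_DISCOVERY=1 ;;       # require CHAP during discovery
            l) DURING_LOGIN=1 ;;           # require CHAP during login
            b) BI_DIRECT=1 ;;              # bidirectional (mutual) CHAP
        esac
    done
}

# Same invocation as in the log: mutual CHAP during discovery.
parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b
echo "$AUTH_GROUP_ID $CHAP_USER $CHAP_MUSER $DURING_DISCOVERY $BI_DIRECT"
```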
00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 1 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 1 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@95 -- # '[' 0 -eq 1 ']' 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@103 -- # '[' 1 -eq 1 ']' 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@104 -- # rpc_cmd iscsi_set_discovery_auth -r -m -g 1 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.960 executing 
discovery without adding credential to initiator - we expect failure 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@26 -- # echo 'executing discovery without adding credential to initiator - we expect failure' 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@27 -- # rc=0 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@28 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:14:42.960 iscsiadm: Login failed to authenticate with target 00:14:42.960 iscsiadm: discovery login to 10.0.0.1 rejected: initiator failed authorization 00:14:42.960 iscsiadm: Could not perform SendTargets discovery: iSCSI login failed due to authorization failure 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@28 -- # rc=24 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@29 -- # '[' 24 -eq 0 ']' 00:14:42.960 configuring initiator for bidirectional authentication 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@35 -- # echo 'configuring initiator for bidirectional authentication' 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@36 -- # config_chap_credentials_for_initiator -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@113 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@13 -- # OPTIND=0 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@16 --
# BI_DIRECT=0 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@20 -- # CHAP_MPASS= 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 
00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@114 -- # default_initiator_chap_credentials 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:14:42.960 iscsiadm: No matching sessions found 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # true 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:14:42.960 iscsiadm: No records found 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # true 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password 
= password/' /etc/iscsi/iscsid.conf 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:14:42.960 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:14:42.961 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:14:42.961 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:14:42.961 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:14:42.961 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:14:42.961 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@78 -- # restart_iscsid 00:14:42.961 05:03:57 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:14:46.242 05:04:00 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:14:46.242 05:04:00 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- # sleep 1 00:14:47.173 05:04:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@79 -- # trap 'trap - 
ERR; print_backtrace >&2' ERR 00:14:47.173 05:04:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@116 -- # '[' 0 -eq 1 ']' 00:14:47.173 05:04:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@126 -- # '[' 1 -eq 1 ']' 00:14:47.173 05:04:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@127 -- # sed -i 's/#discovery.sendtargets.auth.authmethod = CHAP/discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:14:47.173 05:04:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@128 -- # sed -i 's/#discovery.sendtargets.auth.username =.*/discovery.sendtargets.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:14:47.174 05:04:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@129 -- # sed -i 's/#discovery.sendtargets.auth.password =.*/discovery.sendtargets.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:14:47.174 05:04:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' 1 -eq 1 ']' 00:14:47.174 05:04:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' -n 321978654321 ']' 00:14:47.174 05:04:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' -n mchapo ']' 00:14:47.174 05:04:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@131 -- # sed -i 's/#discovery.sendtargets.auth.username_in =.*/discovery.sendtargets.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:14:47.174 05:04:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@132 -- # sed -i 's/#discovery.sendtargets.auth.password_in =.*/discovery.sendtargets.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:14:47.174 05:04:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@135 -- # restart_iscsid 00:14:47.174 05:04:01 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:14:50.450 05:04:04 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:14:50.450 05:04:04 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- 
# sleep 1 00:14:51.383 executing discovery with adding credential to initiator 00:14:51.383 05:04:05 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@136 -- # trap 'trap - ERR; default_initiator_chap_credentials; print_backtrace >&2' ERR 00:14:51.383 05:04:05 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@37 -- # echo 'executing discovery with adding credential to initiator' 00:14:51.383 05:04:05 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@38 -- # rc=0 00:14:51.383 05:04:05 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@39 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:14:51.383 10.0.0.1:3260,1 iqn.2016-06.io.spdk:disk1 00:14:51.383 05:04:05 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@40 -- # '[' 0 -ne 0 ']' 00:14:51.383 05:04:05 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@44 -- # echo DONE 00:14:51.383 DONE 00:14:51.383 05:04:05 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@45 -- # default_initiator_chap_credentials 00:14:51.383 05:04:05 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:14:51.383 iscsiadm: No matching sessions found 00:14:51.383 05:04:05 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # true 00:14:51.383 05:04:05 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:14:51.383 05:04:05 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:14:51.383 05:04:05 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:14:51.383 05:04:05 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:14:51.383 05:04:05 
iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:14:51.383 05:04:05 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:14:51.383 05:04:05 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:14:51.383 05:04:05 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:14:51.383 05:04:05 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:14:51.383 05:04:05 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:14:51.383 05:04:05 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:14:51.383 05:04:05 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@78 -- # restart_iscsid 00:14:51.383 05:04:05 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:14:54.665 05:04:08 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:14:54.665 05:04:08 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- # sleep 1 00:14:55.599 05:04:09 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:14:55.599 05:04:09 
iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@47 -- # trap - SIGINT SIGTERM EXIT 00:14:55.599 05:04:09 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@49 -- # killprocess 70629 00:14:55.599 05:04:09 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@948 -- # '[' -z 70629 ']' 00:14:55.599 05:04:09 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@952 -- # kill -0 70629 00:14:55.600 05:04:09 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@953 -- # uname 00:14:55.600 05:04:09 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:55.600 05:04:09 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70629 00:14:55.600 killing process with pid 70629 00:14:55.600 05:04:09 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:55.600 05:04:09 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:55.600 05:04:09 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70629' 00:14:55.600 05:04:09 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@967 -- # kill 70629 00:14:55.600 05:04:09 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@972 -- # wait 70629 00:14:58.130 05:04:12 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@51 -- # iscsitestfini 00:14:58.130 05:04:12 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:14:58.130 00:14:58.130 real 0m18.026s 00:14:58.130 user 0m17.765s 00:14:58.130 sys 0m0.826s 00:14:58.130 ************************************ 00:14:58.130 END TEST chap_during_discovery 00:14:58.130 ************************************ 00:14:58.130 05:04:12 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:58.130 05:04:12 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 
-- # set +x 00:14:58.130 05:04:12 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@33 -- # run_test chap_mutual_auth /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_mutual_not_set.sh 00:14:58.130 05:04:12 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:58.130 05:04:12 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:58.130 05:04:12 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:14:58.130 ************************************ 00:14:58.130 START TEST chap_mutual_auth 00:14:58.130 ************************************ 00:14:58.130 05:04:12 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_mutual_not_set.sh 00:14:58.130 * Looking for test storage... 00:14:58.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap 00:14:58.130 05:04:12 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:14:58.130 05:04:12 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 
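default_initiator_chap_credentials in the trace above resets the initiator by commenting the CHAP directives back out of /etc/iscsi/iscsid.conf with sed. A self-contained sketch of that toggling against a scratch copy (the sed expressions are the ones logged from chap_common.sh@73-75; the file path and sample contents here are illustrative, not the real /etc/iscsi/iscsid.conf):

```shell
#!/usr/bin/env bash
# Sketch of the iscsid.conf reset performed by default_initiator_chap_credentials,
# run against a temporary copy instead of the live config.
conf=$(mktemp)
cat > "$conf" <<'EOF'
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = chapo
discovery.sendtargets.auth.password = 123456789123
EOF

# Comment each directive back out, restoring placeholder defaults
# (same substitutions as in the log).
sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' "$conf"
sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' "$conf"
sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' "$conf"

commented=$(grep -c '^#' "$conf")
echo "commented-out directives: $commented"
```

In the real test this is followed by `systemctl restart iscsid` so the daemon re-reads the file; that step is omitted here since it needs a live service.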
00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_common.sh 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@7 -- # TARGET_NAME=iqn.2016-06.io.spdk:disk1 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@8 -- # TARGET_ALIAS_NAME=disk1_alias 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@9 -- # MALLOC_BDEV_SIZE=64 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@10 -- # MALLOC_BLOCK_SIZE=512 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@13 -- # USER=chapo 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@14 -- # MUSER=mchapo 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@15 -- # PASS=123456789123 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@16 -- 
# MPASS=321978654321 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@19 -- # iscsitestinit 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@21 -- # set_up_iscsi_target 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@140 -- # timing_enter start_iscsi_tgt 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@142 -- # pid=70929 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@143 -- # echo 'iSCSI target launched. pid: 70929' 00:14:58.131 iSCSI target launched. pid: 70929 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@144 -- # trap 'killprocess $pid;exit 1' SIGINT SIGTERM EXIT 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@145 -- # waitforlisten 70929 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@141 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@829 -- # '[' -z 70929 ']' 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:58.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:58.131 05:04:12 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:14:58.390 [2024-07-24 05:04:12.868852] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:14:58.390 [2024-07-24 05:04:12.869026] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70929 ] 00:14:58.648 [2024-07-24 05:04:13.176510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.911 [2024-07-24 05:04:13.381205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.170 05:04:13 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.170 05:04:13 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@862 -- # return 0 00:14:59.170 05:04:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@146 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:14:59.170 05:04:13 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.170 05:04:13 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:14:59.170 05:04:13 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.170 05:04:13 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@147 -- # rpc_cmd framework_start_init 00:14:59.170 05:04:13 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.170 05:04:13 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:14:59.428 [2024-07-24 05:04:13.892994] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:59.991 05:04:14 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
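The target-side setup traced in these tests (both chap_during_discovery above and chap_mutual_auth below) boils down to a short RPC recipe. A hedged summary using the same RPC names that appear in the trace; it assumes a running iscsi_tgt and SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock, so it is a recipe sketch rather than a runnable script:

```shell
# Recipe form of the RPC calls traced in this log (assumes a live iscsi_tgt
# and rpc.py on PATH; addresses and credentials are the test's values).
rpc.py iscsi_create_portal_group 1 10.0.0.1:3260
rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32
rpc.py bdev_malloc_create 64 512    # creates Malloc0 (64 MiB, 512 B blocks)
rpc.py iscsi_create_target_node iqn.2016-06.io.spdk:disk1 disk1_alias Malloc0:0 1:2 256 -d
rpc.py iscsi_create_auth_group 1
rpc.py iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 1
rpc.py iscsi_set_discovery_auth -r -m -g 1    # require mutual CHAP for discovery
```

The last call is what makes the first unauthenticated `iscsiadm -m discovery` attempt in the log fail with an authorization error; the mutual_auth test uses `iscsi_target_node_set_auth` instead to attach the auth group to the target node.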
00:14:59.991 iscsi_tgt is listening. Running tests... 00:14:59.991 05:04:14 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@148 -- # echo 'iscsi_tgt is listening. Running tests...' 00:14:59.991 05:04:14 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@149 -- # timing_exit start_iscsi_tgt 00:14:59.991 05:04:14 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:59.991 05:04:14 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:14:59.991 05:04:14 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@151 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:14:59.991 05:04:14 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.991 05:04:14 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:14:59.991 05:04:14 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.991 05:04:14 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@152 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:14:59.991 05:04:14 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.991 05:04:14 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:14:59.991 05:04:14 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.991 05:04:14 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@153 -- # rpc_cmd bdev_malloc_create 64 512 00:14:59.991 05:04:14 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.991 05:04:14 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:15:00.247 Malloc0 00:15:00.247 05:04:14 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.247 05:04:14 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@154 -- # rpc_cmd iscsi_create_target_node iqn.2016-06.io.spdk:disk1 disk1_alias Malloc0:0 1:2 256 -d 00:15:00.247 05:04:14 iscsi_tgt.chap_mutual_auth -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.247 05:04:14 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:15:00.247 05:04:14 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.247 05:04:14 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@155 -- # sleep 1 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@156 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:15:01.179 configuring target for authentication 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@24 -- # echo 'configuring target for authentication' 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@25 -- # config_chap_credentials_for_target -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@84 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 -- # CHAP_MPASS= 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:15:01.179 05:04:15 
iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- # DURING_LOGIN=1 00:15:01.179 05:04:15 
iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 1 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:15:01.179 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 1 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@95 -- # '[' 1 -eq 1 ']' 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@96 -- # '[' 0 -eq 1 ']' 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@99 -- # rpc_cmd iscsi_target_node_set_auth -g 1 -r iqn.2016-06.io.spdk:disk1 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@103 -- # '[' 0 -eq 1 ']' 
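The xtrace above boils down to a four-RPC sequence on the target side. A dry-run sketch of that sequence follows; `rpc_cmd` is stubbed here to print the call it would make, whereas in the real run it wraps `scripts/rpc.py` against the live `iscsi_tgt`:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target-side CHAP setup traced above.
# rpc_cmd is a stub that prints the JSON-RPC method it would invoke.
rpc_cmd() { echo "rpc: $*"; }

configure_target_chap() {
    local group_id=$1 user=$2 pass=$3 muser=$4 mpass=$5 node=$6
    rpc_cmd iscsi_create_auth_group "$group_id"
    rpc_cmd iscsi_auth_group_add_secret \
        -u "$user" -s "$pass" -m "$muser" -r "$mpass" "$group_id"
    # -r requires CHAP for normal login; note there is no -m here, so
    # mutual CHAP stays disabled on the target -- the exact condition
    # this chap_mutual_not_set test exercises first.
    rpc_cmd iscsi_target_node_set_auth -g "$group_id" -r "$node"
    rpc_cmd iscsi_set_discovery_auth -r -g "$group_id"
}

configure_target_chap 1 chapo 123456789123 mchapo 321978654321 \
    iqn.2016-06.io.spdk:disk1
```

The same helper is reused later in the log with `-t 2` and the additional `-m` flag, which is what finally lets the mutual-CHAP discovery succeed.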
00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@106 -- # rpc_cmd iscsi_set_discovery_auth -r -g 1 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@26 -- # echo 'executing discovery without adding credential to initiator - we expect failure' 00:15:01.180 executing discovery without adding credential to initiator - we expect failure 00:15:01.180 configuring initiator with bidirectional authentication 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@28 -- # echo 'configuring initiator with bidirectional authentication' 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@29 -- # config_chap_credentials_for_initiator -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@113 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 --
# CHAP_MUSER= 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # 
getopts :t:u:s:r:m:dlb opt 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- # DURING_LOGIN=1 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@114 -- # default_initiator_chap_credentials 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:15:01.180 iscsiadm: No matching sessions found 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # true 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:15:01.180 iscsiadm: No records found 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # true 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:15:01.180 05:04:15 
iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@78 -- # restart_iscsid 00:15:01.180 05:04:15 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:15:04.563 05:04:18 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:15:04.563 05:04:18 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:15:05.498 05:04:19 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:15:05.498 05:04:19 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@116 -- # '[' 1 -eq 1 ']' 00:15:05.498 05:04:19 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@117 -- # sed -i 's/#node.session.auth.authmethod = CHAP/node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 
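The `default_initiator_chap_credentials` helper traced above resets the initiator by commenting out every CHAP knob in `/etc/iscsi/iscsid.conf` before restarting `iscsid`. A self-contained sketch of those sed edits, applied to a scratch copy (`/tmp/iscsid.conf.sketch` is an illustrative path, not the real config):

```shell
#!/usr/bin/env bash
# Sketch of the iscsid.conf reset: comment out the active CHAP settings
# so the initiator falls back to unauthenticated logins.
set -e
conf=/tmp/iscsid.conf.sketch   # scratch copy; the test edits /etc/iscsi/iscsid.conf
cat > "$conf" <<'EOF'
node.session.auth.authmethod = CHAP
node.session.auth.username = chapo
node.session.auth.password = 123456789123
discovery.sendtargets.auth.authmethod = CHAP
EOF

# The same sed edits the test script runs (node session + discovery).
sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' "$conf"
sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' "$conf"
sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' "$conf"
sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' "$conf"

! grep -q '^[^#]' "$conf" && echo "all CHAP settings disabled"
```

The mirror-image edits (uncommenting and filling in `chapo`/`mchapo` and their secrets), which the log shows next, re-enable one-way and then mutual CHAP on the initiator side.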
00:15:05.498 05:04:19 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@118 -- # sed -i 's/#node.session.auth.username =.*/node.session.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:15:05.498 05:04:19 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@119 -- # sed -i 's/#node.session.auth.password =.*/node.session.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:15:05.498 05:04:19 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' 1 -eq 1 ']' 00:15:05.498 05:04:19 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' -n 321978654321 ']' 00:15:05.498 05:04:19 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' -n mchapo ']' 00:15:05.498 05:04:19 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@121 -- # sed -i 's/#node.session.auth.username_in =.*/node.session.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:15:05.498 05:04:19 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@122 -- # sed -i 's/#node.session.auth.password_in =.*/node.session.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:15:05.498 05:04:19 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@126 -- # '[' 1 -eq 1 ']' 00:15:05.498 05:04:19 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@127 -- # sed -i 's/#discovery.sendtargets.auth.authmethod = CHAP/discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:15:05.498 05:04:19 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@128 -- # sed -i 's/#discovery.sendtargets.auth.username =.*/discovery.sendtargets.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:15:05.498 05:04:19 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@129 -- # sed -i 's/#discovery.sendtargets.auth.password =.*/discovery.sendtargets.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:15:05.498 05:04:19 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@130 -- # '[' 1 -eq 1 ']' 00:15:05.498 05:04:19 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@130 -- # '[' -n 321978654321 ']' 00:15:05.498 05:04:19 
iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@130 -- # '[' -n mchapo ']' 00:15:05.498 05:04:19 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@131 -- # sed -i 's/#discovery.sendtargets.auth.username_in =.*/discovery.sendtargets.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:15:05.498 05:04:19 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@132 -- # sed -i 's/#discovery.sendtargets.auth.password_in =.*/discovery.sendtargets.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:15:05.498 05:04:19 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@135 -- # restart_iscsid 00:15:05.498 05:04:19 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:15:08.778 05:04:22 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:15:08.778 05:04:22 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:15:09.342 executing discovery - target should not be discovered since the -m option was not used 00:15:09.342 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@136 -- # trap 'trap - ERR; default_initiator_chap_credentials; print_backtrace >&2' ERR 00:15:09.342 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@30 -- # echo 'executing discovery - target should not be discovered since the -m option was not used' 00:15:09.342 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@31 -- # rc=0 00:15:09.342 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@32 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:15:09.600 [2024-07-24 05:04:23.978720] iscsi.c: 982:iscsi_auth_params: *ERROR*: Initiator wants to use mutual CHAP for security, but it's not enabled. 
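The `rc=24` capture that follows is the test's expected-failure idiom: record the exit code without aborting, then treat *success* as the error. A minimal sketch, with `false` standing in for the `iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260` call that the target rejects because the initiator asks for mutual CHAP the target has not enabled (`-m` was not passed):

```shell
#!/usr/bin/env bash
# Expected-failure pattern: capture rc, fail only if discovery SUCCEEDS.
rc=0
false || rc=$?   # stand-in for the iscsiadm discovery call (rc=24 in the log)
if [ "$rc" -eq 0 ]; then
    echo 'ERROR: discovery succeeded, but the mutual CHAP mismatch should reject it' >&2
    exit 1
fi
echo "discovery rejected as expected (rc=$rc)"
```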
00:15:09.600 [2024-07-24 05:04:23.978776] iscsi.c:1957:iscsi_op_login_rsp_handle_csg_bit: *ERROR*: iscsi_auth_params() failed 00:15:09.600 iscsiadm: Login failed to authenticate with target 00:15:09.600 iscsiadm: discovery login to 10.0.0.1 rejected: initiator failed authorization 00:15:09.600 iscsiadm: Could not perform SendTargets discovery: iSCSI login failed due to authorization failure 00:15:09.600 configuring target for authentication with the -m option 00:15:09.600 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@32 -- # rc=24 00:15:09.600 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@33 -- # '[' 24 -eq 0 ']' 00:15:09.600 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@37 -- # echo 'configuring target for authentication with the -m option' 00:15:09.600 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@38 -- # config_chap_credentials_for_target -t 2 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:15:09.600 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@84 -- # parse_cmd_line -t 2 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:15:09.600 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:15:09.600 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:15:09.600 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:15:09.600 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:15:09.600 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:15:09.600 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:15:09.600 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:15:09.600 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:15:09.600 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- 
# AUTH_GROUP_ID=1 00:15:09.600 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:15:09.600 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:15:09.600 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=2 00:15:09.600 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:15:09.600 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:15:09.600 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:15:09.601 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:15:09.601 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:15:09.601 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:15:09.601 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:15:09.601 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:15:09.601 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:15:09.601 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:15:09.601 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:15:09.601 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:15:09.601 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:15:09.601 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:15:09.601 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:15:09.601 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:15:09.601 05:04:23 iscsi_tgt.chap_mutual_auth -- 
chap/chap_common.sh@24 -- # case ${opt} in 00:15:09.601 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- # DURING_LOGIN=1 00:15:09.601 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:15:09.601 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:15:09.601 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:15:09.601 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:15:09.601 05:04:23 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 2 00:15:09.601 05:04:23 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.601 05:04:23 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 2 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@95 -- # '[' 1 -eq 1 ']' 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@96 -- # '[' 1 -eq 1 ']' 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@97 -- # rpc_cmd iscsi_target_node_set_auth -g 2 -r -m iqn.2016-06.io.spdk:disk1 00:15:09.601 05:04:24 
iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@103 -- # '[' 1 -eq 1 ']' 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@104 -- # rpc_cmd iscsi_set_discovery_auth -r -m -g 2 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:15:09.601 executing discovery: 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@39 -- # echo 'executing discovery:' 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@40 -- # rc=0 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@41 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:15:09.601 10.0.0.1:3260,1 iqn.2016-06.io.spdk:disk1 00:15:09.601 executing login: 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@42 -- # '[' 0 -ne 0 ']' 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@46 -- # echo 'executing login:' 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@47 -- # rc=0 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@48 -- # iscsiadm -m node -l -p 10.0.0.1:3260 00:15:09.601 Logging in to [iface: default, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 00:15:09.601 Login to [iface: default, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 
successful. 00:15:09.601 DONE 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@49 -- # '[' 0 -ne 0 ']' 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@54 -- # echo DONE 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@55 -- # default_initiator_chap_credentials 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:15:09.601 [2024-07-24 05:04:24.101708] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:09.601 Logging out of session [sid: 5, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 00:15:09.601 Logout of [sid: 5, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] successful. 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = 
CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@78 -- # restart_iscsid 00:15:09.601 05:04:24 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:15:12.881 05:04:27 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:15:12.881 05:04:27 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:15:13.815 05:04:28 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:15:13.815 05:04:28 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@57 -- # trap - SIGINT SIGTERM EXIT 00:15:13.815 05:04:28 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@59 -- # killprocess 70929 00:15:13.815 05:04:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@948 -- # '[' -z 70929 ']' 00:15:13.815 05:04:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@952 -- # kill -0 70929 00:15:13.815 05:04:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@953 -- # uname 00:15:13.815 05:04:28 iscsi_tgt.chap_mutual_auth -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:13.815 05:04:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70929 00:15:13.815 05:04:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:13.815 05:04:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:13.815 killing process with pid 70929 00:15:13.815 05:04:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70929' 00:15:13.815 05:04:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@967 -- # kill 70929 00:15:13.815 05:04:28 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@972 -- # wait 70929 00:15:16.346 05:04:30 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@61 -- # iscsitestfini 00:15:16.346 05:04:30 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:15:16.346 00:15:16.346 real 0m18.338s 00:15:16.346 user 0m18.203s 00:15:16.346 sys 0m0.869s 00:15:16.346 05:04:30 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:16.346 05:04:30 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:15:16.346 ************************************ 00:15:16.346 END TEST chap_mutual_auth 00:15:16.346 ************************************ 00:15:16.605 05:04:30 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@34 -- # run_test iscsi_tgt_reset /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset/reset.sh 00:15:16.605 05:04:30 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:16.605 05:04:30 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:16.605 05:04:30 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:15:16.605 ************************************ 00:15:16.605 START TEST iscsi_tgt_reset 00:15:16.605 ************************************ 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset/reset.sh 00:15:16.605 * Looking for test storage... 00:15:16.605 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@26 
-- # INITIATOR_NAME=ANY 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@11 -- # iscsitestinit 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@13 -- # MALLOC_BDEV_SIZE=64 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@16 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@18 -- # hash sg_reset 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@22 -- # timing_enter start_iscsi_tgt 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@25 -- # pid=71252 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@26 -- # echo 'Process pid: 71252' 00:15:16.605 Process pid: 71252 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@24 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@28 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@30 -- # waitforlisten 71252 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@829 -- # '[' -z 71252 ']' 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:16.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:16.605 05:04:31 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:15:16.605 [2024-07-24 05:04:31.230597] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:15:16.605 [2024-07-24 05:04:31.230760] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71252 ] 00:15:16.864 [2024-07-24 05:04:31.420442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.123 [2024-07-24 05:04:31.740220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.692 05:04:32 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:17.692 05:04:32 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@862 -- # return 0 00:15:17.692 05:04:32 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@31 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:15:17.692 05:04:32 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.692 05:04:32 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:15:17.692 05:04:32 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.692 05:04:32 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@32 -- # rpc_cmd framework_start_init 
00:15:17.692 05:04:32 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.692 05:04:32 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:15:17.951 [2024-07-24 05:04:32.333016] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:18.519 05:04:32 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.519 05:04:32 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@33 -- # echo 'iscsi_tgt is listening. Running tests...' 00:15:18.519 iscsi_tgt is listening. Running tests... 00:15:18.519 05:04:32 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@35 -- # timing_exit start_iscsi_tgt 00:15:18.519 05:04:32 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:18.519 05:04:32 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:15:18.519 05:04:33 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@37 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:15:18.519 05:04:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.519 05:04:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:15:18.519 05:04:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.519 05:04:33 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@38 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:15:18.519 05:04:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.519 05:04:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:15:18.519 05:04:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.519 05:04:33 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@39 -- # rpc_cmd bdev_malloc_create 64 512 00:15:18.519 05:04:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.519 05:04:33 iscsi_tgt.iscsi_tgt_reset -- 
common/autotest_common.sh@10 -- # set +x 00:15:18.519 Malloc0 00:15:18.519 05:04:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.519 05:04:33 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@44 -- # rpc_cmd iscsi_create_target_node Target3 Target3_alias Malloc0:0 1:2 64 -d 00:15:18.519 05:04:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.519 05:04:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:15:18.519 05:04:33 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.519 05:04:33 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@45 -- # sleep 1 00:15:19.917 05:04:34 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@47 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:15:19.917 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:15:19.917 05:04:34 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@48 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:15:19.917 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:15:19.917 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:15:19.917 05:04:34 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@49 -- # waitforiscsidevices 1 00:15:19.917 05:04:34 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@116 -- # local num=1 00:15:19.917 05:04:34 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:15:19.917 05:04:34 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:15:19.917 05:04:34 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:15:19.917 05:04:34 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:15:19.917 [2024-07-24 05:04:34.203819] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:19.917 05:04:34 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # n=1 00:15:19.918 05:04:34 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:15:19.918 05:04:34 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@123 -- # return 0 00:15:19.918 05:04:34 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # iscsiadm -m session -P 3 00:15:19.918 05:04:34 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # awk '{print $4}' 00:15:19.918 05:04:34 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # grep 'Attached scsi disk' 00:15:19.918 05:04:34 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # dev=sda 00:15:19.918 05:04:34 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@54 -- # fiopid=71325 00:15:19.918 05:04:34 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 60 00:15:19.918 FIO pid: 71325 00:15:19.918 05:04:34 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@55 -- # echo 'FIO pid: 71325' 00:15:19.918 05:04:34 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@57 -- # trap 'iscsicleanup; killprocess $pid; killprocess $fiopid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:15:19.918 05:04:34 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 
00:15:19.918 05:04:34 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:15:19.918 [global] 00:15:19.918 thread=1 00:15:19.918 invalidate=1 00:15:19.918 rw=read 00:15:19.918 time_based=1 00:15:19.918 runtime=60 00:15:19.918 ioengine=libaio 00:15:19.918 direct=1 00:15:19.918 bs=512 00:15:19.918 iodepth=1 00:15:19.918 norandommap=1 00:15:19.918 numjobs=1 00:15:19.918 00:15:19.918 [job0] 00:15:19.918 filename=/dev/sda 00:15:19.918 queue_depth set to 113 (sda) 00:15:19.918 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:15:19.918 fio-3.35 00:15:19.918 Starting 1 thread 00:15:20.856 05:04:35 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 71252 00:15:20.856 05:04:35 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 71325 00:15:20.856 05:04:35 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:15:20.856 [2024-07-24 05:04:35.234165] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:15:20.856 [2024-07-24 05:04:35.234264] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:15:20.856 05:04:35 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:15:20.856 [2024-07-24 05:04:35.236436] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:21.788 05:04:36 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 71252 00:15:21.789 05:04:36 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 71325 00:15:21.789 05:04:36 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 00:15:21.789 05:04:36 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:15:22.722 05:04:37 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 71252 00:15:22.722 05:04:37 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 71325 00:15:22.722 05:04:37 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:15:22.722 [2024-07-24 
05:04:37.248664] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:15:22.722 [2024-07-24 05:04:37.248767] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:15:22.722 05:04:37 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:15:22.722 [2024-07-24 05:04:37.250410] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:23.656 05:04:38 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 71252 00:15:23.656 05:04:38 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 71325 00:15:23.656 05:04:38 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 00:15:23.656 05:04:38 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:15:25.031 05:04:39 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 71252 00:15:25.031 05:04:39 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 71325 00:15:25.031 05:04:39 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:15:25.031 [2024-07-24 05:04:39.262477] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:15:25.031 [2024-07-24 05:04:39.262563] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:15:25.031 05:04:39 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:15:25.031 [2024-07-24 05:04:39.263709] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:25.967 Cleaning up iSCSI connection 00:15:25.967 05:04:40 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 71252 00:15:25.967 05:04:40 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 71325 00:15:25.967 05:04:40 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@70 -- # kill 71325 00:15:25.967 05:04:40 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@71 -- # wait 71325 00:15:25.967 05:04:40 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@71 -- # true 00:15:25.967 05:04:40 iscsi_tgt.iscsi_tgt_reset -- 
reset/reset.sh@73 -- # trap - SIGINT SIGTERM EXIT 00:15:25.967 05:04:40 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@75 -- # iscsicleanup 00:15:25.967 05:04:40 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:15:25.967 05:04:40 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:15:25.967 fio: io_u error on file /dev/sda: No such device: read offset=52132864, buflen=512 00:15:25.967 fio: pid=71351, err=19/file:io_u.c:1889, func=io_u error, error=No such device 00:15:25.967 00:15:25.967 job0: (groupid=0, jobs=1): err=19 (file:io_u.c:1889, func=io_u error, error=No such device): pid=71351: Wed Jul 24 05:04:40 2024 00:15:25.967 read: IOPS=17.7k, BW=8846KiB/s (9059kB/s)(49.7MiB/5755msec) 00:15:25.967 slat (usec): min=2, max=930, avg= 5.38, stdev= 3.15 00:15:25.967 clat (nsec): min=1694, max=755248, avg=50739.55, stdev=8387.12 00:15:25.967 lat (usec): min=44, max=763, avg=56.11, stdev= 8.66 00:15:25.967 clat percentiles (usec): 00:15:25.967 | 1.00th=[ 42], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 48], 00:15:25.967 | 30.00th=[ 49], 40.00th=[ 49], 50.00th=[ 49], 60.00th=[ 49], 00:15:25.967 | 70.00th=[ 50], 80.00th=[ 52], 90.00th=[ 59], 95.00th=[ 61], 00:15:25.967 | 99.00th=[ 75], 99.50th=[ 81], 99.90th=[ 112], 99.95th=[ 161], 00:15:25.967 | 99.99th=[ 412] 00:15:25.967 bw ( KiB/s): min= 8072, max= 9103, per=100.00%, avg=8851.00, stdev=279.64, samples=11 00:15:25.967 iops : min=16144, max=18206, avg=17702.00, stdev=559.27, samples=11 00:15:25.967 lat (usec) : 2=0.01%, 20=0.01%, 50=73.94%, 100=25.93%, 250=0.10% 00:15:25.967 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.01% 00:15:25.967 cpu : usr=4.52%, sys=13.73%, ctx=101832, majf=0, minf=2 00:15:25.967 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:25.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.967 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:15:25.967 issued rwts: total=101823,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:25.967 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:25.967 00:15:25.967 Run status group 0 (all jobs): 00:15:25.967 READ: bw=8846KiB/s (9059kB/s), 8846KiB/s-8846KiB/s (9059kB/s-9059kB/s), io=49.7MiB (52.1MB), run=5755-5755msec 00:15:25.967 00:15:25.967 Disk stats (read/write): 00:15:25.967 sda: ios=99886/0, merge=0/0, ticks=4957/0, in_queue=4957, util=98.39% 00:15:25.967 Logging out of session [sid: 6, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:15:25.967 Logout of [sid: 6, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:15:25.967 05:04:40 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:15:25.967 05:04:40 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@983 -- # rm -rf 00:15:25.967 05:04:40 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@76 -- # killprocess 71252 00:15:25.967 05:04:40 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@948 -- # '[' -z 71252 ']' 00:15:25.967 05:04:40 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@952 -- # kill -0 71252 00:15:25.967 05:04:40 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@953 -- # uname 00:15:25.967 05:04:40 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:25.967 05:04:40 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71252 00:15:25.967 killing process with pid 71252 00:15:25.967 05:04:40 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:25.967 05:04:40 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:25.967 05:04:40 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71252' 00:15:25.967 05:04:40 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@967 -- # kill 71252 00:15:25.967 
05:04:40 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@972 -- # wait 71252 00:15:28.501 05:04:42 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@77 -- # iscsitestfini 00:15:28.501 05:04:42 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:15:28.501 00:15:28.501 real 0m11.986s 00:15:28.501 user 0m9.592s 00:15:28.501 sys 0m2.237s 00:15:28.501 05:04:42 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:28.501 ************************************ 00:15:28.501 05:04:42 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:15:28.501 END TEST iscsi_tgt_reset 00:15:28.501 ************************************ 00:15:28.501 05:04:43 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@35 -- # run_test iscsi_tgt_rpc_config /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.sh 00:15:28.501 05:04:43 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:28.501 05:04:43 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:28.501 05:04:43 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:15:28.501 ************************************ 00:15:28.501 START TEST iscsi_tgt_rpc_config 00:15:28.501 ************************************ 00:15:28.501 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.sh 00:15:28.501 * Looking for test storage... 
00:15:28.760 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:15:28.760 
05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@11 -- # iscsitestinit 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@13 -- # MALLOC_BDEV_SIZE=64 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@16 -- # rpc_config_py=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.py 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@18 -- # timing_enter start_iscsi_tgt 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@21 -- # pid=71520 00:15:28.760 Process pid: 71520 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@22 -- # echo 'Process pid: 71520' 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@24 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@26 -- # waitforlisten 71520 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@20 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:15:28.760 05:04:43 
iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@829 -- # '[' -z 71520 ']' 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:28.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:28.760 05:04:43 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:15:28.760 [2024-07-24 05:04:43.269336] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:15:28.760 [2024-07-24 05:04:43.269505] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71520 ] 00:15:29.018 [2024-07-24 05:04:43.455187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.276 [2024-07-24 05:04:43.663989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.535 05:04:44 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:29.535 05:04:44 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@862 -- # return 0 00:15:29.535 05:04:44 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@28 -- # rpc_wait_pid=71539 00:15:29.535 05:04:44 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:15:29.535 05:04:44 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@29 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 16 00:15:29.793 05:04:44 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@32 -- # ps 71539 00:15:29.793 PID TTY STAT TIME COMMAND 00:15:29.793 71539 ? R 0:00 python3 /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:15:29.793 05:04:44 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:30.362 [2024-07-24 05:04:44.768494] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:30.954 05:04:45 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@35 -- # sleep 1 00:15:31.888 iscsi_tgt is listening. Running tests... 00:15:31.889 05:04:46 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@36 -- # echo 'iscsi_tgt is listening. Running tests...' 00:15:31.889 05:04:46 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@39 -- # NOT ps 71539 00:15:31.889 05:04:46 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@648 -- # local es=0 00:15:31.889 05:04:46 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@650 -- # valid_exec_arg ps 71539 00:15:31.889 05:04:46 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@636 -- # local arg=ps 00:15:31.889 05:04:46 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:31.889 05:04:46 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # type -t ps 00:15:31.889 05:04:46 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:31.889 05:04:46 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # type -P ps 00:15:31.889 05:04:46 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:31.889 05:04:46 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # arg=/usr/bin/ps 00:15:31.889 05:04:46 
iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/ps ]] 00:15:31.889 05:04:46 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@651 -- # ps 71539 00:15:31.889 PID TTY STAT TIME COMMAND 00:15:31.889 05:04:46 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@651 -- # es=1 00:15:31.889 05:04:46 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:31.889 05:04:46 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:31.889 05:04:46 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:31.889 05:04:46 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@43 -- # rpc_wait_pid=71575 00:15:31.889 05:04:46 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@44 -- # sleep 1 00:15:31.889 05:04:46 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:15:33.265 05:04:47 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@45 -- # NOT ps 71575 00:15:33.265 05:04:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@648 -- # local es=0 00:15:33.265 05:04:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@650 -- # valid_exec_arg ps 71575 00:15:33.265 05:04:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@636 -- # local arg=ps 00:15:33.265 05:04:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:33.265 05:04:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # type -t ps 00:15:33.265 05:04:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:33.265 05:04:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # type -P ps 00:15:33.265 05:04:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:33.265 
05:04:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # arg=/usr/bin/ps 00:15:33.265 05:04:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/ps ]] 00:15:33.265 05:04:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@651 -- # ps 71575 00:15:33.265 PID TTY STAT TIME COMMAND 00:15:33.265 05:04:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@651 -- # es=1 00:15:33.265 05:04:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:33.265 05:04:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:33.265 05:04:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:33.265 05:04:47 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@47 -- # timing_exit start_iscsi_tgt 00:15:33.265 05:04:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:33.265 05:04:47 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:15:33.265 05:04:47 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@49 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.py /home/vagrant/spdk_repo/spdk/scripts/rpc.py 10.0.0.1 10.0.0.2 3260 10.0.0.2/32 spdk_iscsi_ns 00:15:55.191 [2024-07-24 05:05:07.959326] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:56.127 [2024-07-24 05:05:10.453489] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:57.064 verify_log_flag_rpc_methods passed 00:15:57.064 create_malloc_bdevs_rpc_methods passed 00:15:57.064 verify_portal_groups_rpc_methods passed 00:15:57.064 verify_initiator_groups_rpc_method passed. 00:15:57.064 This issue will be fixed later. 00:15:57.064 verify_target_nodes_rpc_methods passed. 
00:15:57.064 verify_scsi_devices_rpc_methods passed 00:15:57.064 verify_iscsi_connection_rpc_methods passed 00:15:57.064 05:05:11 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:57.323 [ 00:15:57.323 { 00:15:57.323 "name": "Malloc0", 00:15:57.323 "aliases": [ 00:15:57.323 "ce6938df-addd-4e6a-95ee-ed425fc697f7" 00:15:57.323 ], 00:15:57.323 "product_name": "Malloc disk", 00:15:57.323 "block_size": 512, 00:15:57.323 "num_blocks": 131072, 00:15:57.323 "uuid": "ce6938df-addd-4e6a-95ee-ed425fc697f7", 00:15:57.323 "assigned_rate_limits": { 00:15:57.323 "rw_ios_per_sec": 0, 00:15:57.323 "rw_mbytes_per_sec": 0, 00:15:57.323 "r_mbytes_per_sec": 0, 00:15:57.323 "w_mbytes_per_sec": 0 00:15:57.323 }, 00:15:57.323 "claimed": false, 00:15:57.323 "zoned": false, 00:15:57.323 "supported_io_types": { 00:15:57.323 "read": true, 00:15:57.323 "write": true, 00:15:57.323 "unmap": true, 00:15:57.323 "flush": true, 00:15:57.323 "reset": true, 00:15:57.323 "nvme_admin": false, 00:15:57.323 "nvme_io": false, 00:15:57.323 "nvme_io_md": false, 00:15:57.323 "write_zeroes": true, 00:15:57.323 "zcopy": true, 00:15:57.323 "get_zone_info": false, 00:15:57.323 "zone_management": false, 00:15:57.323 "zone_append": false, 00:15:57.323 "compare": false, 00:15:57.323 "compare_and_write": false, 00:15:57.323 "abort": true, 00:15:57.323 "seek_hole": false, 00:15:57.323 "seek_data": false, 00:15:57.323 "copy": true, 00:15:57.323 "nvme_iov_md": false 00:15:57.323 }, 00:15:57.323 "memory_domains": [ 00:15:57.323 { 00:15:57.323 "dma_device_id": "system", 00:15:57.323 "dma_device_type": 1 00:15:57.323 }, 00:15:57.323 { 00:15:57.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.323 "dma_device_type": 2 00:15:57.323 } 00:15:57.323 ], 00:15:57.323 "driver_specific": {} 00:15:57.323 }, 00:15:57.323 { 00:15:57.323 "name": "Malloc1", 00:15:57.323 "aliases": [ 00:15:57.323 "21f2d80e-fa40-48b1-8bc8-551c0f01c8e0" 00:15:57.323 ], 
00:15:57.323 "product_name": "Malloc disk", 00:15:57.324 "block_size": 512, 00:15:57.324 "num_blocks": 131072, 00:15:57.324 "uuid": "21f2d80e-fa40-48b1-8bc8-551c0f01c8e0", 00:15:57.324 "assigned_rate_limits": { 00:15:57.324 "rw_ios_per_sec": 0, 00:15:57.324 "rw_mbytes_per_sec": 0, 00:15:57.324 "r_mbytes_per_sec": 0, 00:15:57.324 "w_mbytes_per_sec": 0 00:15:57.324 }, 00:15:57.324 "claimed": false, 00:15:57.324 "zoned": false, 00:15:57.324 "supported_io_types": { 00:15:57.324 "read": true, 00:15:57.324 "write": true, 00:15:57.324 "unmap": true, 00:15:57.324 "flush": true, 00:15:57.324 "reset": true, 00:15:57.324 "nvme_admin": false, 00:15:57.324 "nvme_io": false, 00:15:57.324 "nvme_io_md": false, 00:15:57.324 "write_zeroes": true, 00:15:57.324 "zcopy": true, 00:15:57.324 "get_zone_info": false, 00:15:57.324 "zone_management": false, 00:15:57.324 "zone_append": false, 00:15:57.324 "compare": false, 00:15:57.324 "compare_and_write": false, 00:15:57.324 "abort": true, 00:15:57.324 "seek_hole": false, 00:15:57.324 "seek_data": false, 00:15:57.324 "copy": true, 00:15:57.324 "nvme_iov_md": false 00:15:57.324 }, 00:15:57.324 "memory_domains": [ 00:15:57.324 { 00:15:57.324 "dma_device_id": "system", 00:15:57.324 "dma_device_type": 1 00:15:57.324 }, 00:15:57.324 { 00:15:57.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.324 "dma_device_type": 2 00:15:57.324 } 00:15:57.324 ], 00:15:57.324 "driver_specific": {} 00:15:57.324 }, 00:15:57.324 { 00:15:57.324 "name": "Malloc2", 00:15:57.324 "aliases": [ 00:15:57.324 "82c452fb-6aad-433d-9c18-ca9c3e36230d" 00:15:57.324 ], 00:15:57.324 "product_name": "Malloc disk", 00:15:57.324 "block_size": 512, 00:15:57.324 "num_blocks": 131072, 00:15:57.324 "uuid": "82c452fb-6aad-433d-9c18-ca9c3e36230d", 00:15:57.324 "assigned_rate_limits": { 00:15:57.324 "rw_ios_per_sec": 0, 00:15:57.324 "rw_mbytes_per_sec": 0, 00:15:57.324 "r_mbytes_per_sec": 0, 00:15:57.324 "w_mbytes_per_sec": 0 00:15:57.324 }, 00:15:57.324 "claimed": false, 00:15:57.324 
"zoned": false, 00:15:57.324 "supported_io_types": { 00:15:57.324 "read": true, 00:15:57.324 "write": true, 00:15:57.324 "unmap": true, 00:15:57.324 "flush": true, 00:15:57.324 "reset": true, 00:15:57.324 "nvme_admin": false, 00:15:57.324 "nvme_io": false, 00:15:57.324 "nvme_io_md": false, 00:15:57.324 "write_zeroes": true, 00:15:57.324 "zcopy": true, 00:15:57.324 "get_zone_info": false, 00:15:57.324 "zone_management": false, 00:15:57.324 "zone_append": false, 00:15:57.324 "compare": false, 00:15:57.324 "compare_and_write": false, 00:15:57.324 "abort": true, 00:15:57.324 "seek_hole": false, 00:15:57.324 "seek_data": false, 00:15:57.324 "copy": true, 00:15:57.324 "nvme_iov_md": false 00:15:57.324 }, 00:15:57.324 "memory_domains": [ 00:15:57.324 { 00:15:57.324 "dma_device_id": "system", 00:15:57.324 "dma_device_type": 1 00:15:57.324 }, 00:15:57.324 { 00:15:57.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.324 "dma_device_type": 2 00:15:57.324 } 00:15:57.324 ], 00:15:57.324 "driver_specific": {} 00:15:57.324 }, 00:15:57.324 { 00:15:57.324 "name": "Malloc3", 00:15:57.324 "aliases": [ 00:15:57.324 "5099c0f1-88a3-43ef-b461-3b1c32dbb4f1" 00:15:57.324 ], 00:15:57.324 "product_name": "Malloc disk", 00:15:57.324 "block_size": 512, 00:15:57.324 "num_blocks": 131072, 00:15:57.324 "uuid": "5099c0f1-88a3-43ef-b461-3b1c32dbb4f1", 00:15:57.324 "assigned_rate_limits": { 00:15:57.324 "rw_ios_per_sec": 0, 00:15:57.324 "rw_mbytes_per_sec": 0, 00:15:57.324 "r_mbytes_per_sec": 0, 00:15:57.324 "w_mbytes_per_sec": 0 00:15:57.324 }, 00:15:57.324 "claimed": false, 00:15:57.324 "zoned": false, 00:15:57.324 "supported_io_types": { 00:15:57.324 "read": true, 00:15:57.324 "write": true, 00:15:57.324 "unmap": true, 00:15:57.324 "flush": true, 00:15:57.324 "reset": true, 00:15:57.324 "nvme_admin": false, 00:15:57.324 "nvme_io": false, 00:15:57.324 "nvme_io_md": false, 00:15:57.324 "write_zeroes": true, 00:15:57.324 "zcopy": true, 00:15:57.324 "get_zone_info": false, 00:15:57.324 
"zone_management": false, 00:15:57.324 "zone_append": false, 00:15:57.324 "compare": false, 00:15:57.324 "compare_and_write": false, 00:15:57.324 "abort": true, 00:15:57.324 "seek_hole": false, 00:15:57.324 "seek_data": false, 00:15:57.324 "copy": true, 00:15:57.324 "nvme_iov_md": false 00:15:57.324 }, 00:15:57.324 "memory_domains": [ 00:15:57.324 { 00:15:57.324 "dma_device_id": "system", 00:15:57.324 "dma_device_type": 1 00:15:57.324 }, 00:15:57.324 { 00:15:57.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.324 "dma_device_type": 2 00:15:57.324 } 00:15:57.324 ], 00:15:57.324 "driver_specific": {} 00:15:57.324 }, 00:15:57.324 { 00:15:57.324 "name": "Malloc4", 00:15:57.324 "aliases": [ 00:15:57.324 "5169c89a-ffe7-4c4e-b6a1-ba9c068620b4" 00:15:57.324 ], 00:15:57.324 "product_name": "Malloc disk", 00:15:57.324 "block_size": 512, 00:15:57.324 "num_blocks": 131072, 00:15:57.324 "uuid": "5169c89a-ffe7-4c4e-b6a1-ba9c068620b4", 00:15:57.324 "assigned_rate_limits": { 00:15:57.324 "rw_ios_per_sec": 0, 00:15:57.324 "rw_mbytes_per_sec": 0, 00:15:57.324 "r_mbytes_per_sec": 0, 00:15:57.324 "w_mbytes_per_sec": 0 00:15:57.324 }, 00:15:57.324 "claimed": false, 00:15:57.324 "zoned": false, 00:15:57.324 "supported_io_types": { 00:15:57.324 "read": true, 00:15:57.324 "write": true, 00:15:57.324 "unmap": true, 00:15:57.324 "flush": true, 00:15:57.324 "reset": true, 00:15:57.324 "nvme_admin": false, 00:15:57.324 "nvme_io": false, 00:15:57.324 "nvme_io_md": false, 00:15:57.324 "write_zeroes": true, 00:15:57.324 "zcopy": true, 00:15:57.324 "get_zone_info": false, 00:15:57.324 "zone_management": false, 00:15:57.324 "zone_append": false, 00:15:57.324 "compare": false, 00:15:57.324 "compare_and_write": false, 00:15:57.324 "abort": true, 00:15:57.324 "seek_hole": false, 00:15:57.324 "seek_data": false, 00:15:57.324 "copy": true, 00:15:57.324 "nvme_iov_md": false 00:15:57.324 }, 00:15:57.324 "memory_domains": [ 00:15:57.324 { 00:15:57.324 "dma_device_id": "system", 00:15:57.324 
"dma_device_type": 1 00:15:57.324 }, 00:15:57.324 { 00:15:57.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.324 "dma_device_type": 2 00:15:57.324 } 00:15:57.324 ], 00:15:57.324 "driver_specific": {} 00:15:57.324 }, 00:15:57.324 { 00:15:57.324 "name": "Malloc5", 00:15:57.324 "aliases": [ 00:15:57.324 "a58ccc92-d0b3-4b2e-b1e0-0db88bfe0bb6" 00:15:57.324 ], 00:15:57.324 "product_name": "Malloc disk", 00:15:57.324 "block_size": 512, 00:15:57.324 "num_blocks": 131072, 00:15:57.324 "uuid": "a58ccc92-d0b3-4b2e-b1e0-0db88bfe0bb6", 00:15:57.324 "assigned_rate_limits": { 00:15:57.324 "rw_ios_per_sec": 0, 00:15:57.324 "rw_mbytes_per_sec": 0, 00:15:57.324 "r_mbytes_per_sec": 0, 00:15:57.324 "w_mbytes_per_sec": 0 00:15:57.324 }, 00:15:57.324 "claimed": false, 00:15:57.324 "zoned": false, 00:15:57.324 "supported_io_types": { 00:15:57.324 "read": true, 00:15:57.324 "write": true, 00:15:57.324 "unmap": true, 00:15:57.324 "flush": true, 00:15:57.324 "reset": true, 00:15:57.324 "nvme_admin": false, 00:15:57.324 "nvme_io": false, 00:15:57.324 "nvme_io_md": false, 00:15:57.324 "write_zeroes": true, 00:15:57.324 "zcopy": true, 00:15:57.324 "get_zone_info": false, 00:15:57.324 "zone_management": false, 00:15:57.324 "zone_append": false, 00:15:57.324 "compare": false, 00:15:57.324 "compare_and_write": false, 00:15:57.324 "abort": true, 00:15:57.324 "seek_hole": false, 00:15:57.324 "seek_data": false, 00:15:57.324 "copy": true, 00:15:57.324 "nvme_iov_md": false 00:15:57.324 }, 00:15:57.324 "memory_domains": [ 00:15:57.324 { 00:15:57.324 "dma_device_id": "system", 00:15:57.324 "dma_device_type": 1 00:15:57.324 }, 00:15:57.324 { 00:15:57.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.324 "dma_device_type": 2 00:15:57.324 } 00:15:57.324 ], 00:15:57.324 "driver_specific": {} 00:15:57.324 } 00:15:57.324 ] 00:15:57.324 05:05:11 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@53 -- # trap - SIGINT SIGTERM EXIT 00:15:57.325 05:05:11 iscsi_tgt.iscsi_tgt_rpc_config -- 
rpc_config/rpc_config.sh@55 -- # iscsicleanup 00:15:57.325 Cleaning up iSCSI connection 00:15:57.325 05:05:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:15:57.325 05:05:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:15:57.325 iscsiadm: No matching sessions found 00:15:57.325 05:05:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@981 -- # true 00:15:57.325 05:05:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:15:57.325 iscsiadm: No records found 00:15:57.325 05:05:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@982 -- # true 00:15:57.325 05:05:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@983 -- # rm -rf 00:15:57.325 05:05:11 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@56 -- # killprocess 71520 00:15:57.325 05:05:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@948 -- # '[' -z 71520 ']' 00:15:57.325 05:05:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@952 -- # kill -0 71520 00:15:57.325 05:05:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@953 -- # uname 00:15:57.325 05:05:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:57.325 05:05:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71520 00:15:57.325 05:05:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:57.325 05:05:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:57.325 killing process with pid 71520 00:15:57.325 05:05:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71520' 00:15:57.325 05:05:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@967 -- # kill 71520 00:15:57.325 
05:05:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@972 -- # wait 71520 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@58 -- # iscsitestfini 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:16:01.515 00:16:01.515 real 0m32.423s 00:16:01.515 user 0m51.659s 00:16:01.515 sys 0m4.803s 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:16:01.515 ************************************ 00:16:01.515 END TEST iscsi_tgt_rpc_config 00:16:01.515 ************************************ 00:16:01.515 05:05:15 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@36 -- # run_test iscsi_tgt_iscsi_lvol /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol/iscsi_lvol.sh 00:16:01.515 05:05:15 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:01.515 05:05:15 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:01.515 05:05:15 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:16:01.515 ************************************ 00:16:01.515 START TEST iscsi_tgt_iscsi_lvol 00:16:01.515 ************************************ 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol/iscsi_lvol.sh 00:16:01.515 * Looking for test storage... 
00:16:01.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:16:01.515 05:05:15 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@11 -- # iscsitestinit 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@13 -- # MALLOC_BDEV_SIZE=128 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@15 -- # '[' 1 -eq 1 ']' 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@16 -- # NUM_LVS=10 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@17 -- # NUM_LVOL=10 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@23 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@24 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@26 -- # timing_enter start_iscsi_tgt 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@29 -- # pid=72104 00:16:01.515 Process pid: 72104 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@30 -- # echo 'Process pid: 72104' 00:16:01.515 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@32 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM 
EXIT 00:16:01.516 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@28 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:16:01.516 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@34 -- # waitforlisten 72104 00:16:01.516 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@829 -- # '[' -z 72104 ']' 00:16:01.516 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.516 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:01.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.516 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.516 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:01.516 05:05:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:01.516 [2024-07-24 05:05:15.764363] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:16:01.516 [2024-07-24 05:05:15.764560] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72104 ] 00:16:01.516 [2024-07-24 05:05:15.947886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:01.774 [2024-07-24 05:05:16.172370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.774 [2024-07-24 05:05:16.172578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:01.774 [2024-07-24 05:05:16.172681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.774 [2024-07-24 05:05:16.172715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:02.032 05:05:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:02.032 05:05:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@862 -- # return 0 00:16:02.032 05:05:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 16 00:16:02.290 05:05:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:02.857 [2024-07-24 05:05:17.243560] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:03.429 iscsi_tgt is listening. Running tests... 00:16:03.429 05:05:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@37 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:16:03.429 05:05:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@39 -- # timing_exit start_iscsi_tgt 00:16:03.429 05:05:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:03.429 05:05:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:03.429 05:05:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@41 -- # timing_enter setup 00:16:03.429 05:05:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:03.429 05:05:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:03.429 05:05:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:16:03.686 05:05:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # seq 1 10 00:16:03.686 05:05:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:16:03.686 05:05:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=3 00:16:03.686 05:05:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 3 ANY 10.0.0.2/32 00:16:03.943 05:05:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 1 -eq 1 ']' 00:16:03.943 05:05:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:16:04.509 05:05:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@50 -- # malloc_bdevs='Malloc0 ' 00:16:04.509 05:05:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:16:04.766 05:05:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@51 -- # malloc_bdevs+=Malloc1 00:16:04.766 05:05:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@52 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:05.024 05:05:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@53 -- # bdev=raid0 00:16:05.024 05:05:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs_1 -c 1048576 00:16:05.282 05:05:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=a129e795-bfc9-4542-85d7-fef21d054685 00:16:05.282 05:05:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:16:05.282 05:05:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:16:05.282 05:05:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:05.282 05:05:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a129e795-bfc9-4542-85d7-fef21d054685 lbd_1 10 00:16:05.539 05:05:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=adb80d91-0954-4a00-8989-0a84d22fa69d 00:16:05.539 05:05:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='adb80d91-0954-4a00-8989-0a84d22fa69d:0 ' 00:16:05.539 05:05:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:05.539 05:05:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a129e795-bfc9-4542-85d7-fef21d054685 lbd_2 10 00:16:05.797 05:05:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b8019c25-0fde-4dc1-b5d5-b5c30a330db1 00:16:05.797 05:05:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b8019c25-0fde-4dc1-b5d5-b5c30a330db1:1 ' 00:16:05.797 05:05:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:05.797 05:05:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a129e795-bfc9-4542-85d7-fef21d054685 lbd_3 10 00:16:06.055 05:05:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=1e76b56e-a97b-470c-b163-dc934295687c 00:16:06.055 05:05:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='1e76b56e-a97b-470c-b163-dc934295687c:2 ' 00:16:06.055 05:05:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:06.055 05:05:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a129e795-bfc9-4542-85d7-fef21d054685 lbd_4 10 00:16:06.055 05:05:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=79a8d6d6-674d-4580-8bcb-3b7a8e6e0b78 00:16:06.055 05:05:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='79a8d6d6-674d-4580-8bcb-3b7a8e6e0b78:3 ' 00:16:06.055 05:05:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:06.055 05:05:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a129e795-bfc9-4542-85d7-fef21d054685 lbd_5 10 00:16:06.312 05:05:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=9d691369-992a-4f91-9da1-c191626ba99f 00:16:06.312 05:05:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='9d691369-992a-4f91-9da1-c191626ba99f:4 ' 00:16:06.312 05:05:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:06.312 05:05:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a129e795-bfc9-4542-85d7-fef21d054685 lbd_6 10 00:16:06.570 05:05:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=d7658197-df7d-4505-b0c6-7a1e5ca5c6c5 00:16:06.570 05:05:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@62 -- # LUNs+='d7658197-df7d-4505-b0c6-7a1e5ca5c6c5:5 ' 00:16:06.570 05:05:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:06.570 05:05:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a129e795-bfc9-4542-85d7-fef21d054685 lbd_7 10 00:16:06.570 05:05:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=3faa873a-d9d4-4566-b8f6-fa67728a28a5 00:16:06.570 05:05:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='3faa873a-d9d4-4566-b8f6-fa67728a28a5:6 ' 00:16:06.570 05:05:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:06.570 05:05:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a129e795-bfc9-4542-85d7-fef21d054685 lbd_8 10 00:16:06.827 05:05:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=bd6a4d05-cb34-450e-a5d1-75abd801fbb6 00:16:06.827 05:05:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='bd6a4d05-cb34-450e-a5d1-75abd801fbb6:7 ' 00:16:06.827 05:05:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:06.827 05:05:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a129e795-bfc9-4542-85d7-fef21d054685 lbd_9 10 00:16:07.084 05:05:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=8257526f-70d8-4dde-af82-5c4534614303 00:16:07.084 05:05:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='8257526f-70d8-4dde-af82-5c4534614303:8 ' 00:16:07.084 05:05:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:07.084 05:05:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a129e795-bfc9-4542-85d7-fef21d054685 lbd_10 10 00:16:07.342 05:05:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=9e070aa8-7552-4b1c-b420-b49d5f757007 00:16:07.342 05:05:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='9e070aa8-7552-4b1c-b420-b49d5f757007:9 ' 00:16:07.342 05:05:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target1 Target1_alias 'adb80d91-0954-4a00-8989-0a84d22fa69d:0 b8019c25-0fde-4dc1-b5d5-b5c30a330db1:1 1e76b56e-a97b-470c-b163-dc934295687c:2 79a8d6d6-674d-4580-8bcb-3b7a8e6e0b78:3 9d691369-992a-4f91-9da1-c191626ba99f:4 d7658197-df7d-4505-b0c6-7a1e5ca5c6c5:5 3faa873a-d9d4-4566-b8f6-fa67728a28a5:6 bd6a4d05-cb34-450e-a5d1-75abd801fbb6:7 8257526f-70d8-4dde-af82-5c4534614303:8 9e070aa8-7552-4b1c-b420-b49d5f757007:9 ' 1:3 256 -d 00:16:07.342 05:05:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:16:07.342 05:05:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=4 00:16:07.342 05:05:21 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 4 ANY 10.0.0.2/32 00:16:07.600 05:05:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 2 -eq 1 ']' 00:16:07.600 05:05:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:16:08.164 05:05:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc2 00:16:08.164 05:05:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc2 lvs_2 -c 1048576 00:16:08.422 05:05:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=ed450a89-7aa9-4be9-9f00-0f366b288354 00:16:08.422 
05:05:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:16:08.422 05:05:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:16:08.422 05:05:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:08.422 05:05:22 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ed450a89-7aa9-4be9-9f00-0f366b288354 lbd_1 10 00:16:08.680 05:05:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=d085ce1f-fa71-49e8-88ff-c4c0a39f4ced 00:16:08.680 05:05:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='d085ce1f-fa71-49e8-88ff-c4c0a39f4ced:0 ' 00:16:08.680 05:05:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:08.680 05:05:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ed450a89-7aa9-4be9-9f00-0f366b288354 lbd_2 10 00:16:08.680 05:05:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=36c4d436-fccd-41ea-9815-fbed5f7697f9 00:16:08.680 05:05:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='36c4d436-fccd-41ea-9815-fbed5f7697f9:1 ' 00:16:08.680 05:05:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:08.680 05:05:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ed450a89-7aa9-4be9-9f00-0f366b288354 lbd_3 10 00:16:08.936 05:05:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=641ab1d6-7741-46e2-9e3e-9ac520a5ca4a 00:16:08.936 05:05:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='641ab1d6-7741-46e2-9e3e-9ac520a5ca4a:2 ' 00:16:08.936 05:05:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:08.936 05:05:23 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ed450a89-7aa9-4be9-9f00-0f366b288354 lbd_4 10 00:16:09.194 05:05:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=3541e007-7904-4198-abf4-82df6993e8fa 00:16:09.194 05:05:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='3541e007-7904-4198-abf4-82df6993e8fa:3 ' 00:16:09.194 05:05:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:09.194 05:05:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ed450a89-7aa9-4be9-9f00-0f366b288354 lbd_5 10 00:16:09.451 05:05:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=a29bdc8a-a10c-4bde-8ea1-72c4fbe4657a 00:16:09.451 05:05:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='a29bdc8a-a10c-4bde-8ea1-72c4fbe4657a:4 ' 00:16:09.451 05:05:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:09.451 05:05:23 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ed450a89-7aa9-4be9-9f00-0f366b288354 lbd_6 10 00:16:09.451 05:05:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=45a4cbb5-decd-44d8-8101-cc94b911edc0 00:16:09.451 05:05:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='45a4cbb5-decd-44d8-8101-cc94b911edc0:5 ' 00:16:09.451 05:05:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:09.451 05:05:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ed450a89-7aa9-4be9-9f00-0f366b288354 lbd_7 10 00:16:09.709 05:05:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=95f14920-ff5b-4f3c-a18c-8b64e2446d2d 
00:16:09.709 05:05:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='95f14920-ff5b-4f3c-a18c-8b64e2446d2d:6 ' 00:16:09.709 05:05:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:09.709 05:05:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ed450a89-7aa9-4be9-9f00-0f366b288354 lbd_8 10 00:16:09.966 05:05:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=0f122545-787d-498d-b30a-8b458a1e201a 00:16:09.966 05:05:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='0f122545-787d-498d-b30a-8b458a1e201a:7 ' 00:16:09.966 05:05:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:09.966 05:05:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ed450a89-7aa9-4be9-9f00-0f366b288354 lbd_9 10 00:16:10.223 05:05:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=50ae77dd-9c10-4656-b7c7-172423c68aeb 00:16:10.223 05:05:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='50ae77dd-9c10-4656-b7c7-172423c68aeb:8 ' 00:16:10.223 05:05:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:10.223 05:05:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ed450a89-7aa9-4be9-9f00-0f366b288354 lbd_10 10 00:16:10.481 05:05:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=17005acb-5e6b-4309-8357-b22a49112ef1 00:16:10.481 05:05:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='17005acb-5e6b-4309-8357-b22a49112ef1:9 ' 00:16:10.481 05:05:24 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target2 Target2_alias 
'd085ce1f-fa71-49e8-88ff-c4c0a39f4ced:0 36c4d436-fccd-41ea-9815-fbed5f7697f9:1 641ab1d6-7741-46e2-9e3e-9ac520a5ca4a:2 3541e007-7904-4198-abf4-82df6993e8fa:3 a29bdc8a-a10c-4bde-8ea1-72c4fbe4657a:4 45a4cbb5-decd-44d8-8101-cc94b911edc0:5 95f14920-ff5b-4f3c-a18c-8b64e2446d2d:6 0f122545-787d-498d-b30a-8b458a1e201a:7 50ae77dd-9c10-4656-b7c7-172423c68aeb:8 17005acb-5e6b-4309-8357-b22a49112ef1:9 ' 1:4 256 -d 00:16:10.481 05:05:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:16:10.481 05:05:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=5 00:16:10.481 05:05:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 5 ANY 10.0.0.2/32 00:16:10.739 05:05:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 3 -eq 1 ']' 00:16:10.739 05:05:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:16:11.305 05:05:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc3 00:16:11.305 05:05:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc3 lvs_3 -c 1048576 00:16:11.563 05:05:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=d990d2c5-8dbc-4776-aa65-bd7fc62686f0 00:16:11.563 05:05:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:16:11.563 05:05:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:16:11.563 05:05:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:11.563 05:05:25 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d990d2c5-8dbc-4776-aa65-bd7fc62686f0 lbd_1 10 00:16:11.563 05:05:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@61 -- # lb_name=3422bea5-0c7f-4a6b-8944-d8d9286383df 00:16:11.563 05:05:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='3422bea5-0c7f-4a6b-8944-d8d9286383df:0 ' 00:16:11.563 05:05:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:11.563 05:05:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d990d2c5-8dbc-4776-aa65-bd7fc62686f0 lbd_2 10 00:16:11.821 05:05:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=2e459ebe-3a86-4521-a6e9-b22396a134d3 00:16:11.821 05:05:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='2e459ebe-3a86-4521-a6e9-b22396a134d3:1 ' 00:16:11.821 05:05:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:11.821 05:05:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d990d2c5-8dbc-4776-aa65-bd7fc62686f0 lbd_3 10 00:16:12.079 05:05:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=a039c6fe-f11f-4a49-a7b9-aa3ef6156c09 00:16:12.079 05:05:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='a039c6fe-f11f-4a49-a7b9-aa3ef6156c09:2 ' 00:16:12.079 05:05:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:12.079 05:05:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d990d2c5-8dbc-4776-aa65-bd7fc62686f0 lbd_4 10 00:16:12.341 05:05:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=a7269b56-0d62-4538-b771-adaa5d77153e 00:16:12.341 05:05:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='a7269b56-0d62-4538-b771-adaa5d77153e:3 ' 00:16:12.341 05:05:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 
$NUM_LVOL) 00:16:12.341 05:05:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d990d2c5-8dbc-4776-aa65-bd7fc62686f0 lbd_5 10 00:16:12.341 05:05:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=7dd21bb3-bf4b-47f9-b018-ca44de3c414e 00:16:12.341 05:05:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='7dd21bb3-bf4b-47f9-b018-ca44de3c414e:4 ' 00:16:12.341 05:05:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:12.341 05:05:26 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d990d2c5-8dbc-4776-aa65-bd7fc62686f0 lbd_6 10 00:16:12.598 05:05:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=9c199ffc-d238-441f-84fe-0e55adec82a8 00:16:12.598 05:05:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='9c199ffc-d238-441f-84fe-0e55adec82a8:5 ' 00:16:12.598 05:05:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:12.598 05:05:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d990d2c5-8dbc-4776-aa65-bd7fc62686f0 lbd_7 10 00:16:12.854 05:05:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=2fcf7304-3821-47a4-84f9-36d335af7bdf 00:16:12.854 05:05:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='2fcf7304-3821-47a4-84f9-36d335af7bdf:6 ' 00:16:12.854 05:05:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:12.854 05:05:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d990d2c5-8dbc-4776-aa65-bd7fc62686f0 lbd_8 10 00:16:13.110 05:05:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
lb_name=6e8e2709-2a3e-43f3-85cf-366bf436b310 00:16:13.111 05:05:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='6e8e2709-2a3e-43f3-85cf-366bf436b310:7 ' 00:16:13.111 05:05:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:13.111 05:05:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d990d2c5-8dbc-4776-aa65-bd7fc62686f0 lbd_9 10 00:16:13.111 05:05:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b050f2c3-f0a2-4f69-9d20-d0349de14890 00:16:13.111 05:05:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b050f2c3-f0a2-4f69-9d20-d0349de14890:8 ' 00:16:13.111 05:05:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:13.111 05:05:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d990d2c5-8dbc-4776-aa65-bd7fc62686f0 lbd_10 10 00:16:13.368 05:05:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=5c49a6f2-5873-40b4-a3e8-3ebde7dd07f6 00:16:13.368 05:05:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='5c49a6f2-5873-40b4-a3e8-3ebde7dd07f6:9 ' 00:16:13.368 05:05:27 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias '3422bea5-0c7f-4a6b-8944-d8d9286383df:0 2e459ebe-3a86-4521-a6e9-b22396a134d3:1 a039c6fe-f11f-4a49-a7b9-aa3ef6156c09:2 a7269b56-0d62-4538-b771-adaa5d77153e:3 7dd21bb3-bf4b-47f9-b018-ca44de3c414e:4 9c199ffc-d238-441f-84fe-0e55adec82a8:5 2fcf7304-3821-47a4-84f9-36d335af7bdf:6 6e8e2709-2a3e-43f3-85cf-366bf436b310:7 b050f2c3-f0a2-4f69-9d20-d0349de14890:8 5c49a6f2-5873-40b4-a3e8-3ebde7dd07f6:9 ' 1:5 256 -d 00:16:13.626 05:05:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 
00:16:13.626 05:05:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=6 00:16:13.626 05:05:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 6 ANY 10.0.0.2/32 00:16:13.626 05:05:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 4 -eq 1 ']' 00:16:13.626 05:05:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:16:14.195 05:05:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc4 00:16:14.195 05:05:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc4 lvs_4 -c 1048576 00:16:14.453 05:05:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=fa66541e-d04c-44e9-b56b-1508c4ac7578 00:16:14.453 05:05:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:16:14.453 05:05:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:16:14.453 05:05:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:14.453 05:05:28 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fa66541e-d04c-44e9-b56b-1508c4ac7578 lbd_1 10 00:16:14.711 05:05:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=069218ed-98ac-4cb4-a5d5-861e2ce626c5 00:16:14.711 05:05:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='069218ed-98ac-4cb4-a5d5-861e2ce626c5:0 ' 00:16:14.711 05:05:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:14.711 05:05:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fa66541e-d04c-44e9-b56b-1508c4ac7578 lbd_2 10 00:16:14.711 05:05:29 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=3bf63dec-cacd-49e7-885b-7e4de6bc1edc 00:16:14.711 05:05:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='3bf63dec-cacd-49e7-885b-7e4de6bc1edc:1 ' 00:16:14.711 05:05:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:14.711 05:05:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fa66541e-d04c-44e9-b56b-1508c4ac7578 lbd_3 10 00:16:14.969 05:05:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c01253fa-b9fa-4df3-bbb9-3213f3466e56 00:16:14.969 05:05:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c01253fa-b9fa-4df3-bbb9-3213f3466e56:2 ' 00:16:14.969 05:05:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:14.969 05:05:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fa66541e-d04c-44e9-b56b-1508c4ac7578 lbd_4 10 00:16:15.227 05:05:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=1e98e9f8-d22d-40d4-8292-9be3cefa612b 00:16:15.227 05:05:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='1e98e9f8-d22d-40d4-8292-9be3cefa612b:3 ' 00:16:15.227 05:05:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:15.227 05:05:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fa66541e-d04c-44e9-b56b-1508c4ac7578 lbd_5 10 00:16:15.227 05:05:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=40606329-5ca8-4c57-8044-55fee77741b7 00:16:15.227 05:05:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='40606329-5ca8-4c57-8044-55fee77741b7:4 ' 00:16:15.227 05:05:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:15.227 05:05:29 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fa66541e-d04c-44e9-b56b-1508c4ac7578 lbd_6 10 00:16:15.484 05:05:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f46462d5-970c-4b5e-9522-53f71ce97c9a 00:16:15.484 05:05:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f46462d5-970c-4b5e-9522-53f71ce97c9a:5 ' 00:16:15.484 05:05:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:15.484 05:05:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fa66541e-d04c-44e9-b56b-1508c4ac7578 lbd_7 10 00:16:15.742 05:05:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=03bbca14-41a2-4bd1-b7fb-32628286f66a 00:16:15.742 05:05:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='03bbca14-41a2-4bd1-b7fb-32628286f66a:6 ' 00:16:15.742 05:05:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:15.742 05:05:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fa66541e-d04c-44e9-b56b-1508c4ac7578 lbd_8 10 00:16:16.001 05:05:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=dc841f4f-258b-47c1-a907-e3c47a6ec0cb 00:16:16.001 05:05:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='dc841f4f-258b-47c1-a907-e3c47a6ec0cb:7 ' 00:16:16.001 05:05:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:16.001 05:05:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fa66541e-d04c-44e9-b56b-1508c4ac7578 lbd_9 10 00:16:16.001 05:05:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@61 -- # lb_name=474a4e44-f3dc-4e30-967d-1e1250749849 00:16:16.001 05:05:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='474a4e44-f3dc-4e30-967d-1e1250749849:8 ' 00:16:16.001 05:05:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:16.001 05:05:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fa66541e-d04c-44e9-b56b-1508c4ac7578 lbd_10 10 00:16:16.259 05:05:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=2f2c0f4c-d79d-4e50-a746-f2f2db4d4e88 00:16:16.259 05:05:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='2f2c0f4c-d79d-4e50-a746-f2f2db4d4e88:9 ' 00:16:16.259 05:05:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target4 Target4_alias '069218ed-98ac-4cb4-a5d5-861e2ce626c5:0 3bf63dec-cacd-49e7-885b-7e4de6bc1edc:1 c01253fa-b9fa-4df3-bbb9-3213f3466e56:2 1e98e9f8-d22d-40d4-8292-9be3cefa612b:3 40606329-5ca8-4c57-8044-55fee77741b7:4 f46462d5-970c-4b5e-9522-53f71ce97c9a:5 03bbca14-41a2-4bd1-b7fb-32628286f66a:6 dc841f4f-258b-47c1-a907-e3c47a6ec0cb:7 474a4e44-f3dc-4e30-967d-1e1250749849:8 2f2c0f4c-d79d-4e50-a746-f2f2db4d4e88:9 ' 1:6 256 -d 00:16:16.517 05:05:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:16:16.517 05:05:30 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=7 00:16:16.517 05:05:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 7 ANY 10.0.0.2/32 00:16:16.775 05:05:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 5 -eq 1 ']' 00:16:16.775 05:05:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:16:17.033 
05:05:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc5 00:16:17.034 05:05:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc5 lvs_5 -c 1048576 00:16:17.292 05:05:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=af2e0f5f-4543-4c9c-bf64-c889798afb77 00:16:17.292 05:05:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:16:17.292 05:05:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:16:17.292 05:05:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:17.292 05:05:31 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u af2e0f5f-4543-4c9c-bf64-c889798afb77 lbd_1 10 00:16:17.550 05:05:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=e07c4375-fc60-44fb-a076-9d7060ad3c4e 00:16:17.550 05:05:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='e07c4375-fc60-44fb-a076-9d7060ad3c4e:0 ' 00:16:17.550 05:05:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:17.550 05:05:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u af2e0f5f-4543-4c9c-bf64-c889798afb77 lbd_2 10 00:16:17.809 05:05:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=0bff121b-f562-4bd1-a18e-7280f3de914b 00:16:17.809 05:05:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='0bff121b-f562-4bd1-a18e-7280f3de914b:1 ' 00:16:17.809 05:05:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:17.809 05:05:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u af2e0f5f-4543-4c9c-bf64-c889798afb77 lbd_3 10 
00:16:17.809 05:05:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=516639b6-6c76-4582-b2e3-2020b117907d 00:16:17.809 05:05:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='516639b6-6c76-4582-b2e3-2020b117907d:2 ' 00:16:17.809 05:05:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:17.809 05:05:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u af2e0f5f-4543-4c9c-bf64-c889798afb77 lbd_4 10 00:16:18.067 05:05:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=ad43b435-1659-4f0a-b5a1-fd65127432df 00:16:18.067 05:05:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='ad43b435-1659-4f0a-b5a1-fd65127432df:3 ' 00:16:18.067 05:05:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:18.067 05:05:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u af2e0f5f-4543-4c9c-bf64-c889798afb77 lbd_5 10 00:16:18.325 05:05:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=261a1075-698a-41e2-9ef0-1a51c9667058 00:16:18.325 05:05:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='261a1075-698a-41e2-9ef0-1a51c9667058:4 ' 00:16:18.325 05:05:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:18.325 05:05:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u af2e0f5f-4543-4c9c-bf64-c889798afb77 lbd_6 10 00:16:18.583 05:05:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c794f73c-5578-4ec3-9186-88c5ad27f4e4 00:16:18.583 05:05:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c794f73c-5578-4ec3-9186-88c5ad27f4e4:5 ' 00:16:18.583 05:05:32 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:18.583 05:05:32 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u af2e0f5f-4543-4c9c-bf64-c889798afb77 lbd_7 10 00:16:18.583 05:05:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=d033ff16-00c6-49d0-a500-0a2391c18923 00:16:18.583 05:05:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='d033ff16-00c6-49d0-a500-0a2391c18923:6 ' 00:16:18.583 05:05:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:18.583 05:05:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u af2e0f5f-4543-4c9c-bf64-c889798afb77 lbd_8 10 00:16:18.842 05:05:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=e1291d38-3514-4441-8264-a720eebb7ec1 00:16:18.842 05:05:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='e1291d38-3514-4441-8264-a720eebb7ec1:7 ' 00:16:18.842 05:05:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:18.842 05:05:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u af2e0f5f-4543-4c9c-bf64-c889798afb77 lbd_9 10 00:16:19.099 05:05:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=6f47300f-fafb-42a8-b59a-4f55d9253675 00:16:19.099 05:05:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='6f47300f-fafb-42a8-b59a-4f55d9253675:8 ' 00:16:19.099 05:05:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:19.099 05:05:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u af2e0f5f-4543-4c9c-bf64-c889798afb77 lbd_10 10 00:16:19.099 05:05:33 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=3961082f-feeb-4b83-a8c8-8452b7a94063 00:16:19.099 05:05:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='3961082f-feeb-4b83-a8c8-8452b7a94063:9 ' 00:16:19.099 05:05:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target5 Target5_alias 'e07c4375-fc60-44fb-a076-9d7060ad3c4e:0 0bff121b-f562-4bd1-a18e-7280f3de914b:1 516639b6-6c76-4582-b2e3-2020b117907d:2 ad43b435-1659-4f0a-b5a1-fd65127432df:3 261a1075-698a-41e2-9ef0-1a51c9667058:4 c794f73c-5578-4ec3-9186-88c5ad27f4e4:5 d033ff16-00c6-49d0-a500-0a2391c18923:6 e1291d38-3514-4441-8264-a720eebb7ec1:7 6f47300f-fafb-42a8-b59a-4f55d9253675:8 3961082f-feeb-4b83-a8c8-8452b7a94063:9 ' 1:7 256 -d 00:16:19.357 05:05:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:16:19.357 05:05:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=8 00:16:19.357 05:05:33 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 8 ANY 10.0.0.2/32 00:16:19.614 05:05:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 6 -eq 1 ']' 00:16:19.614 05:05:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:16:19.871 05:05:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc6 00:16:19.871 05:05:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc6 lvs_6 -c 1048576 00:16:20.129 05:05:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=dadef4c9-1496-4397-95c2-d8d099b5df84 00:16:20.129 05:05:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:16:20.129 05:05:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:16:20.129 05:05:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:20.129 05:05:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dadef4c9-1496-4397-95c2-d8d099b5df84 lbd_1 10 00:16:20.387 05:05:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=fb412979-6d78-4a27-a576-ec3e489f4380 00:16:20.387 05:05:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='fb412979-6d78-4a27-a576-ec3e489f4380:0 ' 00:16:20.387 05:05:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:20.387 05:05:34 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dadef4c9-1496-4397-95c2-d8d099b5df84 lbd_2 10 00:16:20.645 05:05:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=e9e77f70-b8a9-4746-b13b-615f4db06df6 00:16:20.645 05:05:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='e9e77f70-b8a9-4746-b13b-615f4db06df6:1 ' 00:16:20.645 05:05:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:20.645 05:05:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dadef4c9-1496-4397-95c2-d8d099b5df84 lbd_3 10 00:16:20.645 05:05:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=83bab3f6-4593-4c0c-8a02-1e71940b22d9 00:16:20.645 05:05:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='83bab3f6-4593-4c0c-8a02-1e71940b22d9:2 ' 00:16:20.645 05:05:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:20.645 05:05:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
dadef4c9-1496-4397-95c2-d8d099b5df84 lbd_4 10 00:16:20.903 05:05:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=449c65e2-a0e8-4fab-8c49-a636c4cd4f42 00:16:20.903 05:05:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='449c65e2-a0e8-4fab-8c49-a636c4cd4f42:3 ' 00:16:20.903 05:05:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:20.903 05:05:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dadef4c9-1496-4397-95c2-d8d099b5df84 lbd_5 10 00:16:21.161 05:05:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=04c7891a-d08b-4a01-9f6e-17b960acf1ab 00:16:21.161 05:05:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='04c7891a-d08b-4a01-9f6e-17b960acf1ab:4 ' 00:16:21.161 05:05:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:21.161 05:05:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dadef4c9-1496-4397-95c2-d8d099b5df84 lbd_6 10 00:16:21.419 05:05:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=703d1ded-c441-4804-b4d4-b46d9ec2d72f 00:16:21.419 05:05:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='703d1ded-c441-4804-b4d4-b46d9ec2d72f:5 ' 00:16:21.419 05:05:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:21.419 05:05:35 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dadef4c9-1496-4397-95c2-d8d099b5df84 lbd_7 10 00:16:21.677 05:05:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=bdd22440-7c59-4b21-888d-4c0a7ae1df64 00:16:21.677 05:05:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='bdd22440-7c59-4b21-888d-4c0a7ae1df64:6 ' 
00:16:21.677 05:05:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:21.677 05:05:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dadef4c9-1496-4397-95c2-d8d099b5df84 lbd_8 10 00:16:21.677 05:05:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b814987b-65e0-4c25-b2e4-484c88c40874 00:16:21.677 05:05:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b814987b-65e0-4c25-b2e4-484c88c40874:7 ' 00:16:21.677 05:05:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:21.677 05:05:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dadef4c9-1496-4397-95c2-d8d099b5df84 lbd_9 10 00:16:21.935 05:05:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=a122ede5-24d7-4426-9573-c4836c63c8ba 00:16:21.935 05:05:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='a122ede5-24d7-4426-9573-c4836c63c8ba:8 ' 00:16:21.935 05:05:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:21.935 05:05:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dadef4c9-1496-4397-95c2-d8d099b5df84 lbd_10 10 00:16:22.193 05:05:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=7b4db0fb-40d2-43f9-8846-f05fb16b0044 00:16:22.194 05:05:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='7b4db0fb-40d2-43f9-8846-f05fb16b0044:9 ' 00:16:22.194 05:05:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target6 Target6_alias 'fb412979-6d78-4a27-a576-ec3e489f4380:0 e9e77f70-b8a9-4746-b13b-615f4db06df6:1 83bab3f6-4593-4c0c-8a02-1e71940b22d9:2 
449c65e2-a0e8-4fab-8c49-a636c4cd4f42:3 04c7891a-d08b-4a01-9f6e-17b960acf1ab:4 703d1ded-c441-4804-b4d4-b46d9ec2d72f:5 bdd22440-7c59-4b21-888d-4c0a7ae1df64:6 b814987b-65e0-4c25-b2e4-484c88c40874:7 a122ede5-24d7-4426-9573-c4836c63c8ba:8 7b4db0fb-40d2-43f9-8846-f05fb16b0044:9 ' 1:8 256 -d 00:16:22.452 05:05:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:16:22.452 05:05:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=9 00:16:22.452 05:05:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 9 ANY 10.0.0.2/32 00:16:22.452 05:05:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 7 -eq 1 ']' 00:16:22.452 05:05:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:16:23.018 05:05:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc7 00:16:23.018 05:05:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc7 lvs_7 -c 1048576 00:16:23.018 05:05:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=55673dd3-99cd-4d7c-881b-09ad360778c6 00:16:23.018 05:05:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:16:23.018 05:05:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:16:23.018 05:05:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:23.018 05:05:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55673dd3-99cd-4d7c-881b-09ad360778c6 lbd_1 10 00:16:23.276 05:05:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=d8403a33-acfc-4686-beb7-e1fa1f4e88b2 00:16:23.276 05:05:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@62 -- # LUNs+='d8403a33-acfc-4686-beb7-e1fa1f4e88b2:0 ' 00:16:23.276 05:05:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:23.276 05:05:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55673dd3-99cd-4d7c-881b-09ad360778c6 lbd_2 10 00:16:23.535 05:05:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=5c9b22b1-2576-4173-b28d-8c147970e8b6 00:16:23.535 05:05:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='5c9b22b1-2576-4173-b28d-8c147970e8b6:1 ' 00:16:23.535 05:05:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:23.535 05:05:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55673dd3-99cd-4d7c-881b-09ad360778c6 lbd_3 10 00:16:23.535 05:05:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c7050c00-ed63-4a71-bb01-c8d06e068a45 00:16:23.535 05:05:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c7050c00-ed63-4a71-bb01-c8d06e068a45:2 ' 00:16:23.535 05:05:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:23.535 05:05:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55673dd3-99cd-4d7c-881b-09ad360778c6 lbd_4 10 00:16:23.792 05:05:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=a0ba56fa-8abc-4361-831b-0ca41c428377 00:16:23.792 05:05:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='a0ba56fa-8abc-4361-831b-0ca41c428377:3 ' 00:16:23.792 05:05:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:23.792 05:05:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55673dd3-99cd-4d7c-881b-09ad360778c6 lbd_5 10 00:16:24.049 05:05:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=684261fa-4d2d-4c74-8220-74a5ae6606e0 00:16:24.049 05:05:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='684261fa-4d2d-4c74-8220-74a5ae6606e0:4 ' 00:16:24.049 05:05:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:24.050 05:05:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55673dd3-99cd-4d7c-881b-09ad360778c6 lbd_6 10 00:16:24.309 05:05:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=5a630a57-e4e4-431e-9c58-520d64edbcd3 00:16:24.309 05:05:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='5a630a57-e4e4-431e-9c58-520d64edbcd3:5 ' 00:16:24.309 05:05:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:24.309 05:05:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55673dd3-99cd-4d7c-881b-09ad360778c6 lbd_7 10 00:16:24.309 05:05:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=908238ce-48a7-4f2e-8f60-6a710f7638bc 00:16:24.309 05:05:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='908238ce-48a7-4f2e-8f60-6a710f7638bc:6 ' 00:16:24.309 05:05:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:24.309 05:05:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55673dd3-99cd-4d7c-881b-09ad360778c6 lbd_8 10 00:16:24.567 05:05:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=73052164-38b9-4fb2-95a0-126b8860925f 00:16:24.567 05:05:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@62 -- # LUNs+='73052164-38b9-4fb2-95a0-126b8860925f:7 ' 00:16:24.567 05:05:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:24.567 05:05:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55673dd3-99cd-4d7c-881b-09ad360778c6 lbd_9 10 00:16:24.825 05:05:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=768ed368-1a00-405d-a46d-cf0859147c5a 00:16:24.825 05:05:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='768ed368-1a00-405d-a46d-cf0859147c5a:8 ' 00:16:24.825 05:05:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:24.825 05:05:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55673dd3-99cd-4d7c-881b-09ad360778c6 lbd_10 10 00:16:24.825 05:05:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=0b532680-dbce-4d56-b92a-f010c072b86b 00:16:24.825 05:05:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='0b532680-dbce-4d56-b92a-f010c072b86b:9 ' 00:16:24.825 05:05:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target7 Target7_alias 'd8403a33-acfc-4686-beb7-e1fa1f4e88b2:0 5c9b22b1-2576-4173-b28d-8c147970e8b6:1 c7050c00-ed63-4a71-bb01-c8d06e068a45:2 a0ba56fa-8abc-4361-831b-0ca41c428377:3 684261fa-4d2d-4c74-8220-74a5ae6606e0:4 5a630a57-e4e4-431e-9c58-520d64edbcd3:5 908238ce-48a7-4f2e-8f60-6a710f7638bc:6 73052164-38b9-4fb2-95a0-126b8860925f:7 768ed368-1a00-405d-a46d-cf0859147c5a:8 0b532680-dbce-4d56-b92a-f010c072b86b:9 ' 1:9 256 -d 00:16:25.082 05:05:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:16:25.082 05:05:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=10 
00:16:25.082 05:05:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 10 ANY 10.0.0.2/32 00:16:25.339 05:05:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 8 -eq 1 ']' 00:16:25.339 05:05:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:16:25.597 05:05:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc8 00:16:25.597 05:05:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc8 lvs_8 -c 1048576 00:16:25.854 05:05:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=aac3c1de-a9c7-4b92-b1fe-352111b76498 00:16:25.854 05:05:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:16:25.854 05:05:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:16:25.854 05:05:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:25.854 05:05:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u aac3c1de-a9c7-4b92-b1fe-352111b76498 lbd_1 10 00:16:26.111 05:05:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=27f0c0f7-4302-434e-9de3-ea5d75e94843 00:16:26.111 05:05:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='27f0c0f7-4302-434e-9de3-ea5d75e94843:0 ' 00:16:26.111 05:05:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:26.111 05:05:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u aac3c1de-a9c7-4b92-b1fe-352111b76498 lbd_2 10 00:16:26.369 05:05:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
lb_name=d8937ad8-f9fa-4508-8cf3-94e58ac85bcd 00:16:26.369 05:05:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='d8937ad8-f9fa-4508-8cf3-94e58ac85bcd:1 ' 00:16:26.369 05:05:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:26.369 05:05:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u aac3c1de-a9c7-4b92-b1fe-352111b76498 lbd_3 10 00:16:26.369 05:05:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=6df9b29d-9ce5-4e57-90bb-c18c71070a34 00:16:26.369 05:05:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='6df9b29d-9ce5-4e57-90bb-c18c71070a34:2 ' 00:16:26.369 05:05:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:26.369 05:05:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u aac3c1de-a9c7-4b92-b1fe-352111b76498 lbd_4 10 00:16:26.626 05:05:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=acd226d7-c854-426c-a778-56c58ca24d14 00:16:26.626 05:05:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='acd226d7-c854-426c-a778-56c58ca24d14:3 ' 00:16:26.626 05:05:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:26.626 05:05:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u aac3c1de-a9c7-4b92-b1fe-352111b76498 lbd_5 10 00:16:26.884 05:05:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f50dd900-4d5f-4e07-be1f-97602aca260c 00:16:26.884 05:05:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f50dd900-4d5f-4e07-be1f-97602aca260c:4 ' 00:16:26.884 05:05:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:26.884 05:05:41 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u aac3c1de-a9c7-4b92-b1fe-352111b76498 lbd_6 10 00:16:26.884 05:05:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=130f79d4-9be1-4379-be99-908a2c30856f 00:16:26.884 05:05:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='130f79d4-9be1-4379-be99-908a2c30856f:5 ' 00:16:26.884 05:05:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:26.884 05:05:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u aac3c1de-a9c7-4b92-b1fe-352111b76498 lbd_7 10 00:16:27.142 05:05:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=8e5c148c-8fa0-4fa7-8bd6-7422ffc37d47 00:16:27.142 05:05:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='8e5c148c-8fa0-4fa7-8bd6-7422ffc37d47:6 ' 00:16:27.142 05:05:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:27.142 05:05:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u aac3c1de-a9c7-4b92-b1fe-352111b76498 lbd_8 10 00:16:27.399 05:05:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=3c7ebff9-52fe-42a3-9785-fdd03a919970 00:16:27.399 05:05:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='3c7ebff9-52fe-42a3-9785-fdd03a919970:7 ' 00:16:27.399 05:05:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:27.399 05:05:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u aac3c1de-a9c7-4b92-b1fe-352111b76498 lbd_9 10 00:16:27.656 05:05:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=73e09c8a-5368-4261-8b13-e5a1eb2c47c0 
00:16:27.656 05:05:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='73e09c8a-5368-4261-8b13-e5a1eb2c47c0:8 ' 00:16:27.657 05:05:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:27.657 05:05:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u aac3c1de-a9c7-4b92-b1fe-352111b76498 lbd_10 10 00:16:27.657 05:05:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=db888203-23d5-4c02-9db6-d76233fce9d4 00:16:27.657 05:05:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='db888203-23d5-4c02-9db6-d76233fce9d4:9 ' 00:16:27.657 05:05:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target8 Target8_alias '27f0c0f7-4302-434e-9de3-ea5d75e94843:0 d8937ad8-f9fa-4508-8cf3-94e58ac85bcd:1 6df9b29d-9ce5-4e57-90bb-c18c71070a34:2 acd226d7-c854-426c-a778-56c58ca24d14:3 f50dd900-4d5f-4e07-be1f-97602aca260c:4 130f79d4-9be1-4379-be99-908a2c30856f:5 8e5c148c-8fa0-4fa7-8bd6-7422ffc37d47:6 3c7ebff9-52fe-42a3-9785-fdd03a919970:7 73e09c8a-5368-4261-8b13-e5a1eb2c47c0:8 db888203-23d5-4c02-9db6-d76233fce9d4:9 ' 1:10 256 -d 00:16:27.914 05:05:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:16:27.914 05:05:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=11 00:16:27.914 05:05:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 11 ANY 10.0.0.2/32 00:16:28.171 05:05:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 9 -eq 1 ']' 00:16:28.171 05:05:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:16:28.429 05:05:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # 
bdev=Malloc9 00:16:28.429 05:05:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc9 lvs_9 -c 1048576 00:16:28.687 05:05:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=4e043dcb-21a8-4767-b45c-886ccc77871f 00:16:28.687 05:05:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:16:28.687 05:05:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:16:28.687 05:05:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:28.687 05:05:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4e043dcb-21a8-4767-b45c-886ccc77871f lbd_1 10 00:16:28.944 05:05:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=d805f454-bed5-4cd0-b4e2-ecd7b910238a 00:16:28.944 05:05:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='d805f454-bed5-4cd0-b4e2-ecd7b910238a:0 ' 00:16:28.944 05:05:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:28.944 05:05:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4e043dcb-21a8-4767-b45c-886ccc77871f lbd_2 10 00:16:29.201 05:05:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b6aeea0d-0e58-4c62-acfa-198952a4dbd9 00:16:29.201 05:05:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b6aeea0d-0e58-4c62-acfa-198952a4dbd9:1 ' 00:16:29.201 05:05:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:29.201 05:05:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4e043dcb-21a8-4767-b45c-886ccc77871f lbd_3 10 00:16:29.201 05:05:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@61 -- # lb_name=eb471c06-6305-47f5-be34-68194da12c5c 00:16:29.201 05:05:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='eb471c06-6305-47f5-be34-68194da12c5c:2 ' 00:16:29.201 05:05:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:29.201 05:05:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4e043dcb-21a8-4767-b45c-886ccc77871f lbd_4 10 00:16:29.458 05:05:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=e8be4af9-451d-402b-9bfc-5de1a07cd996 00:16:29.458 05:05:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='e8be4af9-451d-402b-9bfc-5de1a07cd996:3 ' 00:16:29.458 05:05:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:29.458 05:05:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4e043dcb-21a8-4767-b45c-886ccc77871f lbd_5 10 00:16:29.715 05:05:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=e2070cf6-b553-40e6-949e-4f52a8cea445 00:16:29.715 05:05:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='e2070cf6-b553-40e6-949e-4f52a8cea445:4 ' 00:16:29.715 05:05:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:29.715 05:05:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4e043dcb-21a8-4767-b45c-886ccc77871f lbd_6 10 00:16:29.972 05:05:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=9598e452-5797-4fe7-a829-1808c155d205 00:16:29.972 05:05:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='9598e452-5797-4fe7-a829-1808c155d205:5 ' 00:16:29.972 05:05:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 
$NUM_LVOL) 00:16:29.973 05:05:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4e043dcb-21a8-4767-b45c-886ccc77871f lbd_7 10 00:16:30.230 05:05:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=ec1c1f8d-521c-40f2-b646-40c306899dac 00:16:30.230 05:05:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='ec1c1f8d-521c-40f2-b646-40c306899dac:6 ' 00:16:30.230 05:05:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:30.230 05:05:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4e043dcb-21a8-4767-b45c-886ccc77871f lbd_8 10 00:16:30.230 05:05:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f9cb05db-c6da-41d4-a12d-2a4a4634c583 00:16:30.230 05:05:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f9cb05db-c6da-41d4-a12d-2a4a4634c583:7 ' 00:16:30.230 05:05:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:30.230 05:05:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4e043dcb-21a8-4767-b45c-886ccc77871f lbd_9 10 00:16:30.488 05:05:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=45b898d9-d00b-4ee0-a90d-f0fc180014f7 00:16:30.488 05:05:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='45b898d9-d00b-4ee0-a90d-f0fc180014f7:8 ' 00:16:30.488 05:05:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:30.488 05:05:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4e043dcb-21a8-4767-b45c-886ccc77871f lbd_10 10 00:16:30.745 05:05:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
lb_name=25034b6f-f3ab-4c75-8618-ce5885b06ae9 00:16:30.745 05:05:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='25034b6f-f3ab-4c75-8618-ce5885b06ae9:9 ' 00:16:30.746 05:05:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target9 Target9_alias 'd805f454-bed5-4cd0-b4e2-ecd7b910238a:0 b6aeea0d-0e58-4c62-acfa-198952a4dbd9:1 eb471c06-6305-47f5-be34-68194da12c5c:2 e8be4af9-451d-402b-9bfc-5de1a07cd996:3 e2070cf6-b553-40e6-949e-4f52a8cea445:4 9598e452-5797-4fe7-a829-1808c155d205:5 ec1c1f8d-521c-40f2-b646-40c306899dac:6 f9cb05db-c6da-41d4-a12d-2a4a4634c583:7 45b898d9-d00b-4ee0-a90d-f0fc180014f7:8 25034b6f-f3ab-4c75-8618-ce5885b06ae9:9 ' 1:11 256 -d 00:16:31.003 05:05:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:16:31.003 05:05:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=12 00:16:31.003 05:05:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 12 ANY 10.0.0.2/32 00:16:31.003 05:05:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 10 -eq 1 ']' 00:16:31.003 05:05:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:16:31.569 05:05:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc10 00:16:31.569 05:05:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc10 lvs_10 -c 1048576 00:16:31.569 05:05:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=54ff4301-5da9-4acf-b808-dcedfffdce2b 00:16:31.569 05:05:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:16:31.569 05:05:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:16:31.569 
05:05:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:31.569 05:05:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 54ff4301-5da9-4acf-b808-dcedfffdce2b lbd_1 10 00:16:31.827 05:05:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=ae3cd922-62f7-4257-afb1-7e646396687c 00:16:31.827 05:05:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='ae3cd922-62f7-4257-afb1-7e646396687c:0 ' 00:16:31.827 05:05:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:31.827 05:05:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 54ff4301-5da9-4acf-b808-dcedfffdce2b lbd_2 10 00:16:32.085 05:05:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=be59a277-7970-4b76-95b8-41b0a1b0e422 00:16:32.085 05:05:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='be59a277-7970-4b76-95b8-41b0a1b0e422:1 ' 00:16:32.085 05:05:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:32.085 05:05:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 54ff4301-5da9-4acf-b808-dcedfffdce2b lbd_3 10 00:16:32.343 05:05:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=4e3226af-e1c6-48f8-81cf-d4d28faca252 00:16:32.343 05:05:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='4e3226af-e1c6-48f8-81cf-d4d28faca252:2 ' 00:16:32.343 05:05:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:32.343 05:05:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 54ff4301-5da9-4acf-b808-dcedfffdce2b lbd_4 10 00:16:32.602 
05:05:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=446c4d0c-9a3d-4e36-952a-5fa54e995933 00:16:32.602 05:05:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='446c4d0c-9a3d-4e36-952a-5fa54e995933:3 ' 00:16:32.602 05:05:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:32.602 05:05:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 54ff4301-5da9-4acf-b808-dcedfffdce2b lbd_5 10 00:16:32.860 05:05:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=521d5a8b-d4d8-4a5a-ac0f-b2838d8e798b 00:16:32.860 05:05:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='521d5a8b-d4d8-4a5a-ac0f-b2838d8e798b:4 ' 00:16:32.860 05:05:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:32.860 05:05:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 54ff4301-5da9-4acf-b808-dcedfffdce2b lbd_6 10 00:16:32.860 05:05:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=df04e0b7-9166-4290-b1a3-66a4988b5516 00:16:32.860 05:05:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='df04e0b7-9166-4290-b1a3-66a4988b5516:5 ' 00:16:32.860 05:05:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:32.860 05:05:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 54ff4301-5da9-4acf-b808-dcedfffdce2b lbd_7 10 00:16:33.119 05:05:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c448ba1e-4190-4ba0-a7d6-cdbe14affeef 00:16:33.119 05:05:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c448ba1e-4190-4ba0-a7d6-cdbe14affeef:6 ' 00:16:33.119 05:05:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:33.119 05:05:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 54ff4301-5da9-4acf-b808-dcedfffdce2b lbd_8 10 00:16:33.377 05:05:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=4181f668-23a5-45e5-8742-ddd1a1ab391b 00:16:33.377 05:05:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='4181f668-23a5-45e5-8742-ddd1a1ab391b:7 ' 00:16:33.377 05:05:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:33.377 05:05:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 54ff4301-5da9-4acf-b808-dcedfffdce2b lbd_9 10 00:16:33.635 05:05:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=bf930c97-9a4f-48cc-a382-562185dbf9fb 00:16:33.635 05:05:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='bf930c97-9a4f-48cc-a382-562185dbf9fb:8 ' 00:16:33.635 05:05:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:16:33.635 05:05:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 54ff4301-5da9-4acf-b808-dcedfffdce2b lbd_10 10 00:16:33.635 05:05:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=7867a6f3-446a-485b-8b98-c8e3183c4f1c 00:16:33.635 05:05:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='7867a6f3-446a-485b-8b98-c8e3183c4f1c:9 ' 00:16:33.635 05:05:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target10 Target10_alias 'ae3cd922-62f7-4257-afb1-7e646396687c:0 be59a277-7970-4b76-95b8-41b0a1b0e422:1 4e3226af-e1c6-48f8-81cf-d4d28faca252:2 446c4d0c-9a3d-4e36-952a-5fa54e995933:3 
521d5a8b-d4d8-4a5a-ac0f-b2838d8e798b:4 df04e0b7-9166-4290-b1a3-66a4988b5516:5 c448ba1e-4190-4ba0-a7d6-cdbe14affeef:6 4181f668-23a5-45e5-8742-ddd1a1ab391b:7 bf930c97-9a4f-48cc-a382-562185dbf9fb:8 7867a6f3-446a-485b-8b98-c8e3183c4f1c:9 ' 1:12 256 -d 00:16:33.893 05:05:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@66 -- # timing_exit setup 00:16:33.894 05:05:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:33.894 05:05:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:33.894 05:05:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@68 -- # sleep 1 00:16:35.279 05:05:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@70 -- # timing_enter discovery 00:16:35.279 05:05:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:35.279 05:05:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:35.279 05:05:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@71 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:16:35.279 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:16:35.279 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2 00:16:35.279 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:16:35.279 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target4 00:16:35.279 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target5 00:16:35.279 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target6 00:16:35.279 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target7 00:16:35.279 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target8 00:16:35.279 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target9 00:16:35.279 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target10 00:16:35.279 05:05:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@72 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:16:35.279 [2024-07-24 05:05:49.579273] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.279 [2024-07-24 05:05:49.589330] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: 
unsupported INQUIRY VPD page 0xb9 00:16:35.279 [2024-07-24 05:05:49.604482] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.279 [2024-07-24 05:05:49.615673] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.279 [2024-07-24 05:05:49.645024] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.279 [2024-07-24 05:05:49.657219] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.279 [2024-07-24 05:05:49.657648] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.279 [2024-07-24 05:05:49.657986] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.279 [2024-07-24 05:05:49.690875] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.279 [2024-07-24 05:05:49.709354] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.279 [2024-07-24 05:05:49.723322] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.279 [2024-07-24 05:05:49.747553] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.279 [2024-07-24 05:05:49.757662] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.279 [2024-07-24 05:05:49.790881] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.279 [2024-07-24 05:05:49.796190] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.279 [2024-07-24 05:05:49.800735] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.279 [2024-07-24 05:05:49.803182] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.279 [2024-07-24 05:05:49.807304] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.279 
[2024-07-24 05:05:49.842081] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.279 [2024-07-24 05:05:49.848786] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.279 [2024-07-24 05:05:49.861855] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.568 [2024-07-24 05:05:49.903213] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.568 [2024-07-24 05:05:49.909812] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.568 [2024-07-24 05:05:49.912222] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.568 [2024-07-24 05:05:49.918161] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.568 [2024-07-24 05:05:49.936156] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.568 [2024-07-24 05:05:49.938883] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.568 [2024-07-24 05:05:49.945566] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.568 [2024-07-24 05:05:49.967132] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.568 [2024-07-24 05:05:49.982401] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.568 [2024-07-24 05:05:49.983169] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.568 [2024-07-24 05:05:50.008646] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.568 [2024-07-24 05:05:50.036747] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.568 [2024-07-24 05:05:50.037489] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.568 [2024-07-24 05:05:50.051880] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.568 [2024-07-24 05:05:50.107343] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.568 [2024-07-24 05:05:50.111943] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.568 [2024-07-24 05:05:50.119209] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.568 [2024-07-24 05:05:50.121666] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.568 [2024-07-24 05:05:50.128907] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.568 [2024-07-24 05:05:50.156507] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.568 [2024-07-24 05:05:50.157355] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.568 [2024-07-24 05:05:50.158376] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.568 [2024-07-24 05:05:50.197616] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.826 [2024-07-24 05:05:50.228163] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.827 [2024-07-24 05:05:50.232769] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.827 [2024-07-24 05:05:50.304829] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.827 [2024-07-24 05:05:50.318517] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.827 [2024-07-24 05:05:50.377628] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.827 [2024-07-24 05:05:50.379843] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:35.827 [2024-07-24 05:05:50.398592] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:16:35.827 [2024-07-24 05:05:50.447378] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.085 [2024-07-24 05:05:50.526108] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.085 [2024-07-24 05:05:50.598489] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.085 [2024-07-24 05:05:50.607009] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.085 [2024-07-24 05:05:50.607439] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.085 [2024-07-24 05:05:50.648108] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.085 [2024-07-24 05:05:50.657294] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.085 [2024-07-24 05:05:50.690713] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.085 [2024-07-24 05:05:50.701004] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.085 [2024-07-24 05:05:50.710210] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.343 [2024-07-24 05:05:50.731118] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.343 [2024-07-24 05:05:50.758413] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.343 [2024-07-24 05:05:50.803257] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.343 [2024-07-24 05:05:50.818830] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.343 [2024-07-24 05:05:50.852353] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.343 [2024-07-24 05:05:50.907229] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.343 [2024-07-24 
05:05:50.907686] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.343 [2024-07-24 05:05:50.951500] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.343 [2024-07-24 05:05:50.951963] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.343 [2024-07-24 05:05:50.956116] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.601 [2024-07-24 05:05:50.980266] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.601 [2024-07-24 05:05:51.007035] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.601 [2024-07-24 05:05:51.030302] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.601 [2024-07-24 05:05:51.043522] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.601 [2024-07-24 05:05:51.060180] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.601 [2024-07-24 05:05:51.066707] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.601 [2024-07-24 05:05:51.073987] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.601 [2024-07-24 05:05:51.105566] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.601 [2024-07-24 05:05:51.112624] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.601 [2024-07-24 05:05:51.112939] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.601 [2024-07-24 05:05:51.121493] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.601 [2024-07-24 05:05:51.140379] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.601 [2024-07-24 05:05:51.142140] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.601 [2024-07-24 05:05:51.151921] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.601 [2024-07-24 05:05:51.152279] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.601 [2024-07-24 05:05:51.175015] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.601 [2024-07-24 05:05:51.193099] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.601 [2024-07-24 05:05:51.213017] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.601 [2024-07-24 05:05:51.218115] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.601 [2024-07-24 05:05:51.230982] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.860 [2024-07-24 05:05:51.239849] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.860 [2024-07-24 05:05:51.251172] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.860 [2024-07-24 05:05:51.258931] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.860 [2024-07-24 05:05:51.291946] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.860 [2024-07-24 05:05:51.307434] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.860 [2024-07-24 05:05:51.312972] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.860 [2024-07-24 05:05:51.331673] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.860 [2024-07-24 05:05:51.350366] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.860 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 
10.0.0.1,3260] 00:16:36.860 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:16:36.860 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:16:36.860 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:16:36.860 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:16:36.860 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:16:36.860 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:16:36.860 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:16:36.860 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:16:36.860 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:16:36.860 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:16:36.860 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:16:36.860 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:16:36.860 Login to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:16:36.860 Login to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:16:36.860 Login to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:16:36.860 Login to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:16:36.860 Login to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:16:36.860 Login to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 
00:16:36.860 Login to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:16:36.860 05:05:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@73 -- # waitforiscsidevices 100 00:16:36.860 05:05:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@116 -- # local num=100 00:16:36.860 05:05:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:16:36.860 05:05:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:16:36.860 05:05:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:16:36.860 05:05:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:16:36.860 [2024-07-24 05:05:51.457870] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:36.860 05:05:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # n=100 00:16:36.860 05:05:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@120 -- # '[' 100 -ne 100 ']' 00:16:36.860 05:05:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@123 -- # return 0 00:16:36.860 05:05:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@74 -- # timing_exit discovery 00:16:36.860 05:05:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:36.860 05:05:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:37.118 05:05:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@76 -- # timing_enter fio 00:16:37.118 05:05:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:37.118 05:05:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:37.118 05:05:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 8 -t randwrite -r 10 -v 00:16:37.118 [global] 00:16:37.118 thread=1 
00:16:37.118 invalidate=1 00:16:37.118 rw=randwrite 00:16:37.118 time_based=1 00:16:37.118 runtime=10 00:16:37.118 ioengine=libaio 00:16:37.118 direct=1 00:16:37.118 bs=131072 00:16:37.118 iodepth=8 00:16:37.118 norandommap=0 00:16:37.118 numjobs=1 00:16:37.118 00:16:37.118 verify_dump=1 00:16:37.118 verify_backlog=512 00:16:37.118 verify_state_save=0 00:16:37.118 do_verify=1 00:16:37.118 verify=crc32c-intel 00:16:37.118 [job0] 00:16:37.118 filename=/dev/sdc 00:16:37.118 [job1] 00:16:37.118 filename=/dev/sde 00:16:37.118 [job2] 00:16:37.118 filename=/dev/sdh 00:16:37.118 [job3] 00:16:37.118 filename=/dev/sdj 00:16:37.118 [job4] 00:16:37.118 filename=/dev/sdp 00:16:37.118 [job5] 00:16:37.118 filename=/dev/sdt 00:16:37.119 [job6] 00:16:37.119 filename=/dev/sdy 00:16:37.119 [job7] 00:16:37.119 filename=/dev/sdab 00:16:37.119 [job8] 00:16:37.119 filename=/dev/sdad 00:16:37.119 [job9] 00:16:37.119 filename=/dev/sdaf 00:16:37.119 [job10] 00:16:37.119 filename=/dev/sdf 00:16:37.119 [job11] 00:16:37.119 filename=/dev/sdm 00:16:37.119 [job12] 00:16:37.119 filename=/dev/sdr 00:16:37.119 [job13] 00:16:37.119 filename=/dev/sdv 00:16:37.119 [job14] 00:16:37.119 filename=/dev/sdaa 00:16:37.119 [job15] 00:16:37.119 filename=/dev/sdae 00:16:37.119 [job16] 00:16:37.119 filename=/dev/sdag 00:16:37.119 [job17] 00:16:37.119 filename=/dev/sdah 00:16:37.119 [job18] 00:16:37.119 filename=/dev/sdak 00:16:37.119 [job19] 00:16:37.119 filename=/dev/sdam 00:16:37.119 [job20] 00:16:37.119 filename=/dev/sdk 00:16:37.119 [job21] 00:16:37.119 filename=/dev/sdo 00:16:37.119 [job22] 00:16:37.119 filename=/dev/sds 00:16:37.119 [job23] 00:16:37.119 filename=/dev/sdw 00:16:37.119 [job24] 00:16:37.119 filename=/dev/sdz 00:16:37.119 [job25] 00:16:37.119 filename=/dev/sdac 00:16:37.119 [job26] 00:16:37.119 filename=/dev/sdai 00:16:37.119 [job27] 00:16:37.119 filename=/dev/sdaj 00:16:37.119 [job28] 00:16:37.119 filename=/dev/sdal 00:16:37.119 [job29] 00:16:37.119 filename=/dev/sdaq 00:16:37.119 [job30] 
00:16:37.119 filename=/dev/sdan 00:16:37.119 [job31] 00:16:37.119 filename=/dev/sdao 00:16:37.119 [job32] 00:16:37.119 filename=/dev/sdap 00:16:37.119 [job33] 00:16:37.119 filename=/dev/sdar 00:16:37.119 [job34] 00:16:37.119 filename=/dev/sdas 00:16:37.119 [job35] 00:16:37.119 filename=/dev/sdat 00:16:37.119 [job36] 00:16:37.119 filename=/dev/sdau 00:16:37.119 [job37] 00:16:37.119 filename=/dev/sdav 00:16:37.119 [job38] 00:16:37.119 filename=/dev/sdaw 00:16:37.119 [job39] 00:16:37.119 filename=/dev/sday 00:16:37.119 [job40] 00:16:37.119 filename=/dev/sdax 00:16:37.119 [job41] 00:16:37.119 filename=/dev/sdaz 00:16:37.119 [job42] 00:16:37.119 filename=/dev/sdba 00:16:37.119 [job43] 00:16:37.119 filename=/dev/sdbb 00:16:37.119 [job44] 00:16:37.119 filename=/dev/sdbc 00:16:37.119 [job45] 00:16:37.119 filename=/dev/sdbd 00:16:37.119 [job46] 00:16:37.119 filename=/dev/sdbe 00:16:37.119 [job47] 00:16:37.119 filename=/dev/sdbf 00:16:37.119 [job48] 00:16:37.119 filename=/dev/sdbh 00:16:37.119 [job49] 00:16:37.119 filename=/dev/sdbj 00:16:37.119 [job50] 00:16:37.119 filename=/dev/sdbg 00:16:37.119 [job51] 00:16:37.119 filename=/dev/sdbi 00:16:37.119 [job52] 00:16:37.119 filename=/dev/sdbk 00:16:37.119 [job53] 00:16:37.119 filename=/dev/sdbl 00:16:37.119 [job54] 00:16:37.119 filename=/dev/sdbm 00:16:37.119 [job55] 00:16:37.119 filename=/dev/sdbn 00:16:37.119 [job56] 00:16:37.119 filename=/dev/sdbo 00:16:37.119 [job57] 00:16:37.119 filename=/dev/sdbp 00:16:37.119 [job58] 00:16:37.119 filename=/dev/sdbq 00:16:37.119 [job59] 00:16:37.119 filename=/dev/sdbr 00:16:37.119 [job60] 00:16:37.119 filename=/dev/sdbs 00:16:37.119 [job61] 00:16:37.119 filename=/dev/sdbt 00:16:37.119 [job62] 00:16:37.119 filename=/dev/sdbu 00:16:37.119 [job63] 00:16:37.119 filename=/dev/sdbv 00:16:37.119 [job64] 00:16:37.119 filename=/dev/sdby 00:16:37.119 [job65] 00:16:37.119 filename=/dev/sdcc 00:16:37.119 [job66] 00:16:37.119 filename=/dev/sdcg 00:16:37.119 [job67] 00:16:37.119 filename=/dev/sdci 
00:16:37.119 [job68] 00:16:37.119 filename=/dev/sdcl 00:16:37.119 [job69] 00:16:37.119 filename=/dev/sdcn 00:16:37.119 [job70] 00:16:37.119 filename=/dev/sdbx 00:16:37.119 [job71] 00:16:37.119 filename=/dev/sdbz 00:16:37.119 [job72] 00:16:37.119 filename=/dev/sdcb 00:16:37.119 [job73] 00:16:37.119 filename=/dev/sdce 00:16:37.119 [job74] 00:16:37.119 filename=/dev/sdcj 00:16:37.119 [job75] 00:16:37.119 filename=/dev/sdcm 00:16:37.119 [job76] 00:16:37.119 filename=/dev/sdcp 00:16:37.119 [job77] 00:16:37.119 filename=/dev/sdcs 00:16:37.119 [job78] 00:16:37.119 filename=/dev/sdcu 00:16:37.119 [job79] 00:16:37.119 filename=/dev/sdcv 00:16:37.119 [job80] 00:16:37.119 filename=/dev/sdbw 00:16:37.119 [job81] 00:16:37.119 filename=/dev/sdca 00:16:37.119 [job82] 00:16:37.119 filename=/dev/sdcd 00:16:37.119 [job83] 00:16:37.119 filename=/dev/sdcf 00:16:37.119 [job84] 00:16:37.119 filename=/dev/sdch 00:16:37.119 [job85] 00:16:37.119 filename=/dev/sdck 00:16:37.377 [job86] 00:16:37.377 filename=/dev/sdco 00:16:37.377 [job87] 00:16:37.377 filename=/dev/sdcq 00:16:37.377 [job88] 00:16:37.377 filename=/dev/sdcr 00:16:37.377 [job89] 00:16:37.377 filename=/dev/sdct 00:16:37.377 [job90] 00:16:37.377 filename=/dev/sda 00:16:37.377 [job91] 00:16:37.377 filename=/dev/sdb 00:16:37.377 [job92] 00:16:37.377 filename=/dev/sdd 00:16:37.377 [job93] 00:16:37.377 filename=/dev/sdg 00:16:37.377 [job94] 00:16:37.377 filename=/dev/sdi 00:16:37.377 [job95] 00:16:37.377 filename=/dev/sdl 00:16:37.377 [job96] 00:16:37.377 filename=/dev/sdn 00:16:37.377 [job97] 00:16:37.377 filename=/dev/sdq 00:16:37.377 [job98] 00:16:37.377 filename=/dev/sdu 00:16:37.377 [job99] 00:16:37.377 filename=/dev/sdx 00:16:38.751 queue_depth set to 113 (sdc) 00:16:38.751 queue_depth set to 113 (sde) 00:16:38.751 queue_depth set to 113 (sdh) 00:16:38.751 queue_depth set to 113 (sdj) 00:16:38.751 queue_depth set to 113 (sdp) 00:16:38.751 queue_depth set to 113 (sdt) 00:16:38.751 queue_depth set to 113 (sdy) 00:16:38.751 
queue_depth set to 113 (sdab) 00:16:39.008 queue_depth set to 113 (sdad) 00:16:39.008 queue_depth set to 113 (sdaf) 00:16:39.008 queue_depth set to 113 (sdf) 00:16:39.008 queue_depth set to 113 (sdm) 00:16:39.008 queue_depth set to 113 (sdr) 00:16:39.008 queue_depth set to 113 (sdv) 00:16:39.008 queue_depth set to 113 (sdaa) 00:16:39.008 queue_depth set to 113 (sdae) 00:16:39.008 queue_depth set to 113 (sdag) 00:16:39.008 queue_depth set to 113 (sdah) 00:16:39.008 queue_depth set to 113 (sdak) 00:16:39.008 queue_depth set to 113 (sdam) 00:16:39.008 queue_depth set to 113 (sdk) 00:16:39.265 queue_depth set to 113 (sdo) 00:16:39.265 queue_depth set to 113 (sds) 00:16:39.265 queue_depth set to 113 (sdw) 00:16:39.265 queue_depth set to 113 (sdz) 00:16:39.265 queue_depth set to 113 (sdac) 00:16:39.265 queue_depth set to 113 (sdai) 00:16:39.265 queue_depth set to 113 (sdaj) 00:16:39.265 queue_depth set to 113 (sdal) 00:16:39.265 queue_depth set to 113 (sdaq) 00:16:39.265 queue_depth set to 113 (sdan) 00:16:39.265 queue_depth set to 113 (sdao) 00:16:39.265 queue_depth set to 113 (sdap) 00:16:39.522 queue_depth set to 113 (sdar) 00:16:39.522 queue_depth set to 113 (sdas) 00:16:39.522 queue_depth set to 113 (sdat) 00:16:39.522 queue_depth set to 113 (sdau) 00:16:39.522 queue_depth set to 113 (sdav) 00:16:39.522 queue_depth set to 113 (sdaw) 00:16:39.522 queue_depth set to 113 (sday) 00:16:39.522 queue_depth set to 113 (sdax) 00:16:39.522 queue_depth set to 113 (sdaz) 00:16:39.522 queue_depth set to 113 (sdba) 00:16:39.522 queue_depth set to 113 (sdbb) 00:16:39.780 queue_depth set to 113 (sdbc) 00:16:39.780 queue_depth set to 113 (sdbd) 00:16:39.780 queue_depth set to 113 (sdbe) 00:16:39.780 queue_depth set to 113 (sdbf) 00:16:39.780 queue_depth set to 113 (sdbh) 00:16:39.780 queue_depth set to 113 (sdbj) 00:16:39.780 queue_depth set to 113 (sdbg) 00:16:39.780 queue_depth set to 113 (sdbi) 00:16:39.780 queue_depth set to 113 (sdbk) 00:16:39.780 queue_depth set to 113 (sdbl) 
00:16:39.780 queue_depth set to 113 (sdbm) 00:16:39.780 queue_depth set to 113 (sdbn) 00:16:39.780 queue_depth set to 113 (sdbo) 00:16:40.037 queue_depth set to 113 (sdbp) 00:16:40.037 queue_depth set to 113 (sdbq) 00:16:40.037 queue_depth set to 113 (sdbr) 00:16:40.037 queue_depth set to 113 (sdbs) 00:16:40.037 queue_depth set to 113 (sdbt) 00:16:40.037 queue_depth set to 113 (sdbu) 00:16:40.037 queue_depth set to 113 (sdbv) 00:16:40.037 queue_depth set to 113 (sdby) 00:16:40.037 queue_depth set to 113 (sdcc) 00:16:40.037 queue_depth set to 113 (sdcg) 00:16:40.037 queue_depth set to 113 (sdci) 00:16:40.037 queue_depth set to 113 (sdcl) 00:16:40.037 queue_depth set to 113 (sdcn) 00:16:40.295 queue_depth set to 113 (sdbx) 00:16:40.295 queue_depth set to 113 (sdbz) 00:16:40.295 queue_depth set to 113 (sdcb) 00:16:40.295 queue_depth set to 113 (sdce) 00:16:40.295 queue_depth set to 113 (sdcj) 00:16:40.295 queue_depth set to 113 (sdcm) 00:16:40.295 queue_depth set to 113 (sdcp) 00:16:40.295 queue_depth set to 113 (sdcs) 00:16:40.295 queue_depth set to 113 (sdcu) 00:16:40.295 queue_depth set to 113 (sdcv) 00:16:40.295 queue_depth set to 113 (sdbw) 00:16:40.295 queue_depth set to 113 (sdca) 00:16:40.552 queue_depth set to 113 (sdcd) 00:16:40.552 queue_depth set to 113 (sdcf) 00:16:40.552 queue_depth set to 113 (sdch) 00:16:40.552 queue_depth set to 113 (sdck) 00:16:40.552 queue_depth set to 113 (sdco) 00:16:40.552 queue_depth set to 113 (sdcq) 00:16:40.552 queue_depth set to 113 (sdcr) 00:16:40.552 queue_depth set to 113 (sdct) 00:16:40.552 queue_depth set to 113 (sda) 00:16:40.552 queue_depth set to 113 (sdb) 00:16:40.552 queue_depth set to 113 (sdd) 00:16:40.552 queue_depth set to 113 (sdg) 00:16:40.810 queue_depth set to 113 (sdi) 00:16:40.810 queue_depth set to 113 (sdl) 00:16:40.810 queue_depth set to 113 (sdn) 00:16:40.810 queue_depth set to 113 (sdq) 00:16:40.810 queue_depth set to 113 (sdu) 00:16:40.810 queue_depth set to 113 (sdx) 00:16:40.810 job0: (g=0): 
rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job1: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job2: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job3: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job4: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job5: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job6: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job7: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job8: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job9: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job10: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job11: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job12: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job13: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job14: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job15: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, 
ioengine=libaio, iodepth=8 00:16:40.810 job16: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job17: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job18: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job19: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job20: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job21: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job22: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job23: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job24: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job25: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job26: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job27: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job28: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job29: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job30: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job31: (g=0): rw=randwrite, bs=(R) 
128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job32: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job33: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:40.810 job34: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job35: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job36: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job37: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job38: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job39: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job40: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job41: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job42: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job43: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job44: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job45: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job46: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 
00:16:41.068 job47: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job48: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job49: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job50: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job51: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job52: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job53: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job54: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job55: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job56: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job57: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job58: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job59: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job60: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job61: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job62: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 
128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job63: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job64: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job65: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job66: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job67: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job68: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job69: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.068 job70: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.325 job71: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.325 job72: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.325 job73: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.325 job74: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.325 job75: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.325 job76: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.325 job77: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.325 
job78: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.325 job79: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.325 job80: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.325 job81: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.325 job82: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.325 job83: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.325 job84: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.326 job85: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.326 job86: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.326 job87: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.326 job88: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.326 job89: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.326 job90: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.326 job91: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.326 job92: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.326 job93: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 
128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.326 job94: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.326 job95: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.326 job96: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.326 job97: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.326 job98: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.326 job99: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:16:41.326 fio-3.35 00:16:41.326 Starting 100 threads 00:16:41.326 [2024-07-24 05:05:55.701081] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:41.326 [2024-07-24 05:05:55.704791] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:41.326 [2024-07-24 05:05:55.708243] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:41.326 [2024-07-24 05:05:55.711703] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:41.326 [2024-07-24 05:05:55.714240] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:41.326 [2024-07-24 05:05:55.716222] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:41.326 [2024-07-24 05:05:55.718311] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:41.326 [2024-07-24 05:05:55.720699] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:41.326 [2024-07-24 05:05:55.722619] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:41.326 [2024-07-24 
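All 100 logged job definitions share identical parameters (rw=randwrite, bs=128KiB, ioengine=libaio, iodepth=8), which suggests they were generated from a common global section of a fio jobfile. A minimal sketch of such a jobfile follows; the [global]/per-job layout and the device path are assumptions for illustration, and only the rw, bs, ioengine, and iodepth values are taken from the log above.

```ini
; Hypothetical reconstruction -- not the actual jobfile used by this run.
[global]
rw=randwrite   ; matches rw=randwrite in the logged job lines
bs=128k        ; matches bs=(R/W/T) 128KiB-128KiB
ioengine=libaio
iodepth=8
thread         ; the log reports "Starting 100 threads"

; One [jobN] section per target device; the filename is assumed.
[job78]
filename=/dev/sdc
```

With one such [jobN] section per SCSI device (job0 through job99), running `fio jobfile.fio` would emit one definition line per job, as seen at the start of this section.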
00:16:55.407
00:16:55.407 job0: (groupid=0, jobs=1): err= 0: pid=74376: Wed Jul 24 05:06:09 2024
00:16:55.407 read: IOPS=71, BW=9167KiB/s (9387kB/s)(80.0MiB/8936msec)
00:16:55.407 slat (usec): min=5, max=1052, avg=52.23, stdev=107.02
00:16:55.407 clat (usec): min=3731, max=39981, avg=10204.79, stdev=5298.88
00:16:55.407 lat (usec): min=3945, max=40001, avg=10257.02, stdev=5296.26
00:16:55.407 clat percentiles (usec):
00:16:55.407 | 1.00th=[ 4424], 5.00th=[ 4752], 10.00th=[ 5080], 20.00th=[ 6194],
00:16:55.407 | 30.00th=[ 6849], 40.00th=[ 7439], 50.00th=[ 8848], 60.00th=[ 9896],
00:16:55.407 | 70.00th=[11600], 80.00th=[13829], 90.00th=[16909], 95.00th=[19530],
00:16:55.407 | 99.00th=[30802], 99.50th=[35914], 99.90th=[40109], 99.95th=[40109],
00:16:55.407 | 99.99th=[40109]
00:16:55.407 write: IOPS=86, BW=10.8MiB/s (11.4MB/s)(100MiB/9226msec); 0 zone resets
00:16:55.407 slat (usec): min=33, max=5312, avg=145.99, stdev=311.72
00:16:55.407 clat (msec): min=2, max=242, avg=91.60, stdev=38.44
00:16:55.407 lat (msec): min=2, max=242, avg=91.75, stdev=38.45
00:16:55.407 clat percentiles (msec):
00:16:55.407 | 1.00th=[ 5], 5.00th=[ 23], 10.00th=[ 62], 20.00th=[ 66],
00:16:55.407 | 30.00th=[ 70], 40.00th=[ 78], 50.00th=[ 83], 60.00th=[ 90],
00:16:55.407 | 70.00th=[ 105], 80.00th=[ 122], 90.00th=[ 146], 95.00th=[ 163],
00:16:55.407 | 99.00th=[ 199], 99.50th=[ 211], 99.90th=[ 243], 99.95th=[ 243],
00:16:55.407 | 99.99th=[ 243]
00:16:55.407 bw ( KiB/s): min= 5632, max=22784, per=0.94%, avg=10067.00, stdev=4233.78, samples=19
00:16:55.407 iops : min= 44, max= 178, avg=78.32, stdev=33.19, samples=19
00:16:55.407 lat (msec) : 4=0.56%, 10=27.64%, 20=16.46%, 50=2.92%, 100=34.58%
00:16:55.407 lat (msec) : 250=17.85%
00:16:55.407 cpu : usr=0.69%, sys=0.15%, ctx=2319, majf=0, minf=5
00:16:55.407 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:55.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:55.407 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:55.407 issued rwts: total=640,800,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:55.407 latency : target=0, window=0, percentile=100.00%, depth=8
00:16:55.407 job1: (groupid=0, jobs=1): err= 0: pid=74388: Wed Jul 24 05:06:09 2024
00:16:55.407 read: IOPS=73, BW=9348KiB/s (9573kB/s)(80.0MiB/8763msec)
00:16:55.407 slat (usec): min=7, max=1350, avg=52.40, stdev=110.88
00:16:55.407 clat (usec): min=4836, max=38229, avg=9891.95, stdev=4913.43
00:16:55.407 lat (usec): min=4862, max=38255, avg=9944.35, stdev=4909.48
00:16:55.407 clat percentiles (usec):
00:16:55.407 | 1.00th=[ 5080], 5.00th=[ 5407], 10.00th=[ 5735], 20.00th=[ 6390],
00:16:55.407 | 30.00th=[ 6849], 40.00th=[ 7635], 50.00th=[ 9110], 60.00th=[ 9896],
00:16:55.407 | 70.00th=[10421], 80.00th=[11469], 90.00th=[15008], 95.00th=[17695],
00:16:55.407 | 99.00th=[33162], 99.50th=[33424], 99.90th=[38011], 99.95th=[38011],
00:16:55.407 | 99.99th=[38011]
00:16:55.407 write: IOPS=86, BW=10.8MiB/s (11.4MB/s)(100MiB/9219msec); 0 zone resets
00:16:55.407 slat (usec): min=41, max=21097, avg=154.11, stdev=768.66
00:16:55.407 clat (msec): min=2, max=270, avg=91.56, stdev=37.59
00:16:55.407 lat (msec): min=2, max=270, avg=91.71, stdev=37.57
00:16:55.407 clat percentiles (msec):
00:16:55.407 | 1.00th=[ 20], 5.00th=[ 61], 10.00th=[ 62], 20.00th=[ 66],
00:16:55.407 | 30.00th=[ 70], 40.00th=[ 74], 50.00th=[ 80], 60.00th=[ 85],
00:16:55.407 | 70.00th=[ 99], 80.00th=[ 122], 90.00th=[ 146], 95.00th=[ 167],
00:16:55.407 | 99.00th=[ 201], 99.50th=[ 232], 99.90th=[ 271], 99.95th=[ 271],
00:16:55.407 | 99.99th=[ 271]
00:16:55.407 bw ( KiB/s): min= 3840, max=18432, per=0.94%, avg=10055.47, stdev=3881.25, samples=19
00:16:55.407 iops : min= 30, max= 144, avg=78.11, stdev=30.33, samples=19
00:16:55.407 lat (msec) : 4=0.14%, 10=27.78%, 20=15.28%, 50=3.12%, 100=38.19%
00:16:55.407 lat (msec) : 250=15.21%, 500=0.28%
00:16:55.407 cpu : usr=0.49%, sys=0.37%, ctx=2206, majf=0, minf=3
00:16:55.407 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:55.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:55.407 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:55.407 issued rwts: total=640,800,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:55.407 latency : target=0, window=0, percentile=100.00%, depth=8
00:16:55.407 job2: (groupid=0, jobs=1): err= 0: pid=74409: Wed Jul 24 05:06:09 2024
00:16:55.407 read: IOPS=73, BW=9425KiB/s (9651kB/s)(80.0MiB/8692msec)
00:16:55.407 slat (usec): min=7, max=1149, avg=43.03, stdev=86.64
00:16:55.407 clat (usec): min=4444, max=95931, avg=10836.63, stdev=9718.02
00:16:55.407 lat (usec): min=4461, max=96085, avg=10879.67, stdev=9717.43
00:16:55.407 clat percentiles (usec):
00:16:55.407 | 1.00th=[ 4883], 5.00th=[ 5407], 10.00th=[ 5866], 20.00th=[ 6849],
00:16:55.407 | 30.00th=[ 7439], 40.00th=[ 8029], 50.00th=[ 8848], 60.00th=[10028],
00:16:55.407 | 70.00th=[11338], 80.00th=[12387], 90.00th=[14222], 95.00th=[18482],
00:16:55.407 | 99.00th=[86508], 99.50th=[90702], 99.90th=[95945], 99.95th=[95945],
00:16:55.407 | 99.99th=[95945]
00:16:55.407 write: IOPS=84, BW=10.5MiB/s (11.0MB/s)(96.2MiB/9147msec); 0 zone resets
00:16:55.407 slat (usec): min=30, max=3636, avg=124.23, stdev=198.20
00:16:55.407 clat (msec): min=35, max=312, avg=94.47, stdev=38.57
00:16:55.407 lat (msec): min=35, max=312, avg=94.60, stdev=38.57
00:16:55.407 clat percentiles (msec):
00:16:55.407 | 1.00th=[ 42], 5.00th=[ 61], 10.00th=[ 62], 20.00th=[ 67],
00:16:55.407 | 30.00th=[ 70], 40.00th=[ 75], 50.00th=[ 81], 60.00th=[ 88],
00:16:55.407 | 70.00th=[ 99], 80.00th=[ 123], 90.00th=[ 153], 95.00th=[ 176],
00:16:55.407 | 99.00th=[ 230], 99.50th=[ 266], 99.90th=[ 313], 99.95th=[ 313],
00:16:55.407 | 99.99th=[ 313]
00:16:55.407 bw ( KiB/s): min= 3328, max=16416, per=0.91%, avg=9763.75, stdev=3924.19, samples=20
00:16:55.407 iops : min= 26, max= 128, avg=76.10, stdev=30.61, samples=20
00:16:55.407 lat (msec) : 10=27.59%, 20=16.24%, 50=1.56%, 100=38.94%, 250=15.39%
00:16:55.407 lat (msec) : 500=0.28%
00:16:55.407 cpu : usr=0.51%, sys=0.31%, ctx=2115, majf=0, minf=1
00:16:55.407 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:55.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:55.407 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:55.407 issued rwts: total=640,770,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:55.407 latency : target=0, window=0, percentile=100.00%, depth=8
00:16:55.407 job3: (groupid=0, jobs=1): err= 0: pid=74783: Wed Jul 24 05:06:09 2024
00:16:55.407 read: IOPS=77, BW=9905KiB/s (10.1MB/s)(79.4MiB/8206msec)
00:16:55.407 slat (usec): min=7, max=804, avg=61.38, stdev=99.16
00:16:55.407 clat (msec): min=4, max=227, avg=24.73, stdev=29.17
00:16:55.407 lat (msec): min=4, max=227, avg=24.79, stdev=29.17
00:16:55.407 clat percentiles (msec):
00:16:55.407 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 11],
00:16:55.407 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 19],
00:16:55.407 | 70.00th=[ 25], 80.00th=[ 28], 90.00th=[ 44], 95.00th=[ 75],
00:16:55.407 | 99.00th=[ 176], 99.50th=[ 197], 99.90th=[ 228], 99.95th=[ 228],
00:16:55.407 | 99.99th=[ 228]
00:16:55.407 write: IOPS=79, BW=9.97MiB/s (10.5MB/s)(80.0MiB/8025msec); 0 zone resets
00:16:55.407 slat (usec): min=39, max=14700, avg=165.78, stdev=636.44
00:16:55.407 clat (msec): min=24, max=334, avg=99.15, stdev=43.15
00:16:55.407 lat (msec): min=25, max=334, avg=99.32, stdev=43.11
00:16:55.407 clat percentiles (msec):
00:16:55.407 | 1.00th=[ 30], 5.00th=[ 62], 10.00th=[ 63], 20.00th=[ 67],
00:16:55.407 | 30.00th=[ 72], 40.00th=[ 79], 50.00th=[ 85], 60.00th=[ 93],
00:16:55.407 | 70.00th=[ 108], 80.00th=[ 130], 90.00th=[ 157], 95.00th=[ 180],
00:16:55.407 | 99.00th=[ 249], 99.50th=[ 288], 99.90th=[ 334], 99.95th=[ 334],
00:16:55.407 | 99.99th=[ 334]
00:16:55.407 bw ( KiB/s): min= 512, max=15104, per=0.80%, avg=8520.32, stdev=4469.16, samples=19
00:16:55.407 iops : min= 4, max= 118, avg=66.47, stdev=34.86, samples=19
00:16:55.407 lat (msec) : 10=9.02%, 20=22.35%, 50=14.82%, 100=35.84%, 250=17.49%
00:16:55.407 lat (msec) : 500=0.47%
00:16:55.407 cpu : usr=0.54%, sys=0.21%, ctx=2128, majf=0, minf=7
00:16:55.407 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:55.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:55.407 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:55.407 issued rwts: total=635,640,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:55.407 latency : target=0, window=0, percentile=100.00%, depth=8
00:16:55.407 job4: (groupid=0, jobs=1): err= 0: pid=75024: Wed Jul 24 05:06:09 2024
00:16:55.407 read: IOPS=77, BW=9862KiB/s (10.1MB/s)(80.0MiB/8307msec)
00:16:55.407 slat (usec): min=7, max=979, avg=51.14, stdev=84.03
00:16:55.407 clat (msec): min=4, max=349, avg=20.09, stdev=32.32
00:16:55.407 lat (msec): min=4, max=349, avg=20.14, stdev=32.32
00:16:55.407 clat percentiles (msec):
00:16:55.407 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 9],
00:16:55.407 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 17],
00:16:55.407 | 70.00th=[ 20], 80.00th=[ 23], 90.00th=[ 28], 95.00th=[ 34],
00:16:55.407 |
99.00th=[ 197], 99.50th=[ 222], 99.90th=[ 351], 99.95th=[ 351], 00:16:55.407 | 99.99th=[ 351] 00:16:55.407 write: IOPS=82, BW=10.3MiB/s (10.8MB/s)(85.4MiB/8306msec); 0 zone resets 00:16:55.407 slat (usec): min=41, max=18889, avg=174.97, stdev=756.41 00:16:55.407 clat (msec): min=45, max=457, avg=96.09, stdev=52.94 00:16:55.407 lat (msec): min=47, max=457, avg=96.26, stdev=52.93 00:16:55.407 clat percentiles (msec): 00:16:55.408 | 1.00th=[ 56], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 68], 00:16:55.408 | 30.00th=[ 70], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 85], 00:16:55.408 | 70.00th=[ 94], 80.00th=[ 114], 90.00th=[ 140], 95.00th=[ 194], 00:16:55.408 | 99.00th=[ 388], 99.50th=[ 405], 99.90th=[ 460], 99.95th=[ 460], 00:16:55.408 | 99.99th=[ 460] 00:16:55.408 bw ( KiB/s): min= 1792, max=14336, per=0.85%, avg=9104.79, stdev=4388.42, samples=19 00:16:55.408 iops : min= 14, max= 112, avg=71.00, stdev=34.33, samples=19 00:16:55.408 lat (msec) : 10=12.40%, 20=21.84%, 50=13.38%, 100=37.87%, 250=13.23% 00:16:55.408 lat (msec) : 500=1.28% 00:16:55.408 cpu : usr=0.51%, sys=0.28%, ctx=2134, majf=0, minf=5 00:16:55.408 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.408 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.408 issued rwts: total=640,683,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.408 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.408 job5: (groupid=0, jobs=1): err= 0: pid=75025: Wed Jul 24 05:06:09 2024 00:16:55.408 read: IOPS=65, BW=8344KiB/s (8545kB/s)(60.0MiB/7363msec) 00:16:55.408 slat (usec): min=7, max=1308, avg=55.50, stdev=104.18 00:16:55.408 clat (msec): min=5, max=599, avg=22.13, stdev=68.00 00:16:55.408 lat (msec): min=5, max=599, avg=22.18, stdev=68.00 00:16:55.408 clat percentiles (msec): 00:16:55.408 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7], 00:16:55.408 | 30.00th=[ 8], 
40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 11], 00:16:55.408 | 70.00th=[ 12], 80.00th=[ 14], 90.00th=[ 20], 95.00th=[ 55], 00:16:55.408 | 99.00th=[ 523], 99.50th=[ 535], 99.90th=[ 600], 99.95th=[ 600], 00:16:55.408 | 99.99th=[ 600] 00:16:55.408 write: IOPS=71, BW=9144KiB/s (9363kB/s)(77.9MiB/8721msec); 0 zone resets 00:16:55.408 slat (usec): min=40, max=27757, avg=204.66, stdev=1139.52 00:16:55.408 clat (msec): min=61, max=381, avg=110.92, stdev=44.72 00:16:55.408 lat (msec): min=61, max=381, avg=111.13, stdev=44.70 00:16:55.408 clat percentiles (msec): 00:16:55.408 | 1.00th=[ 64], 5.00th=[ 67], 10.00th=[ 69], 20.00th=[ 77], 00:16:55.408 | 30.00th=[ 83], 40.00th=[ 88], 50.00th=[ 95], 60.00th=[ 108], 00:16:55.408 | 70.00th=[ 126], 80.00th=[ 142], 90.00th=[ 174], 95.00th=[ 194], 00:16:55.408 | 99.00th=[ 253], 99.50th=[ 317], 99.90th=[ 380], 99.95th=[ 380], 00:16:55.408 | 99.99th=[ 380] 00:16:55.408 bw ( KiB/s): min= 3072, max=13056, per=0.78%, avg=8293.42, stdev=3531.90, samples=19 00:16:55.408 iops : min= 24, max= 102, avg=64.74, stdev=27.60, samples=19 00:16:55.408 lat (msec) : 10=24.21%, 20=15.32%, 50=1.63%, 100=32.18%, 250=25.20% 00:16:55.408 lat (msec) : 500=1.00%, 750=0.45% 00:16:55.408 cpu : usr=0.48%, sys=0.19%, ctx=1868, majf=0, minf=7 00:16:55.408 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.408 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.408 issued rwts: total=480,623,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.408 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.408 job6: (groupid=0, jobs=1): err= 0: pid=75026: Wed Jul 24 05:06:09 2024 00:16:55.408 read: IOPS=74, BW=9478KiB/s (9706kB/s)(80.0MiB/8643msec) 00:16:55.408 slat (usec): min=5, max=788, avg=40.83, stdev=72.87 00:16:55.408 clat (usec): min=4431, max=56902, avg=12186.68, stdev=6828.76 00:16:55.408 lat (usec): min=4550, 
max=56944, avg=12227.52, stdev=6832.66 00:16:55.408 clat percentiles (usec): 00:16:55.408 | 1.00th=[ 5735], 5.00th=[ 6194], 10.00th=[ 6521], 20.00th=[ 7504], 00:16:55.408 | 30.00th=[ 8586], 40.00th=[ 9372], 50.00th=[10290], 60.00th=[11338], 00:16:55.408 | 70.00th=[13173], 80.00th=[15795], 90.00th=[19006], 95.00th=[23462], 00:16:55.408 | 99.00th=[43779], 99.50th=[55313], 99.90th=[56886], 99.95th=[56886], 00:16:55.408 | 99.99th=[56886] 00:16:55.408 write: IOPS=84, BW=10.6MiB/s (11.1MB/s)(95.9MiB/9054msec); 0 zone resets 00:16:55.408 slat (usec): min=38, max=976, avg=124.69, stdev=136.37 00:16:55.408 clat (msec): min=18, max=367, avg=93.82, stdev=46.68 00:16:55.408 lat (msec): min=18, max=367, avg=93.95, stdev=46.69 00:16:55.408 clat percentiles (msec): 00:16:55.408 | 1.00th=[ 28], 5.00th=[ 62], 10.00th=[ 63], 20.00th=[ 66], 00:16:55.408 | 30.00th=[ 70], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 85], 00:16:55.408 | 70.00th=[ 91], 80.00th=[ 108], 90.00th=[ 148], 95.00th=[ 184], 00:16:55.408 | 99.00th=[ 313], 99.50th=[ 351], 99.90th=[ 368], 99.95th=[ 368], 00:16:55.408 | 99.99th=[ 368] 00:16:55.408 bw ( KiB/s): min= 1024, max=15616, per=0.91%, avg=9726.95, stdev=4364.59, samples=20 00:16:55.408 iops : min= 8, max= 122, avg=75.90, stdev=34.21, samples=20 00:16:55.408 lat (msec) : 10=21.61%, 20=19.83%, 50=4.26%, 100=41.72%, 250=11.73% 00:16:55.408 lat (msec) : 500=0.85% 00:16:55.408 cpu : usr=0.63%, sys=0.18%, ctx=2230, majf=0, minf=7 00:16:55.408 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.408 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.408 issued rwts: total=640,767,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.408 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.408 job7: (groupid=0, jobs=1): err= 0: pid=75027: Wed Jul 24 05:06:09 2024 00:16:55.408 read: IOPS=71, BW=9174KiB/s 
(9394kB/s)(80.0MiB/8930msec) 00:16:55.408 slat (usec): min=8, max=906, avg=51.65, stdev=91.97 00:16:55.408 clat (msec): min=3, max=259, avg=21.10, stdev=32.36 00:16:55.408 lat (msec): min=3, max=259, avg=21.15, stdev=32.36 00:16:55.408 clat percentiles (msec): 00:16:55.408 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10], 00:16:55.408 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 14], 60.00th=[ 15], 00:16:55.408 | 70.00th=[ 16], 80.00th=[ 20], 90.00th=[ 29], 95.00th=[ 75], 00:16:55.408 | 99.00th=[ 243], 99.50th=[ 251], 99.90th=[ 259], 99.95th=[ 259], 00:16:55.408 | 99.99th=[ 259] 00:16:55.408 write: IOPS=86, BW=10.8MiB/s (11.3MB/s)(90.0MiB/8361msec); 0 zone resets 00:16:55.408 slat (usec): min=33, max=14933, avg=150.93, stdev=586.87 00:16:55.408 clat (msec): min=3, max=468, avg=92.19, stdev=51.61 00:16:55.408 lat (msec): min=3, max=468, avg=92.34, stdev=51.59 00:16:55.408 clat percentiles (msec): 00:16:55.408 | 1.00th=[ 7], 5.00th=[ 32], 10.00th=[ 62], 20.00th=[ 65], 00:16:55.408 | 30.00th=[ 69], 40.00th=[ 74], 50.00th=[ 80], 60.00th=[ 85], 00:16:55.408 | 70.00th=[ 94], 80.00th=[ 109], 90.00th=[ 153], 95.00th=[ 184], 00:16:55.408 | 99.00th=[ 321], 99.50th=[ 376], 99.90th=[ 468], 99.95th=[ 468], 00:16:55.408 | 99.99th=[ 468] 00:16:55.408 bw ( KiB/s): min= 2799, max=23296, per=0.90%, avg=9567.53, stdev=5174.64, samples=19 00:16:55.408 iops : min= 21, max= 182, avg=74.26, stdev=40.44, samples=19 00:16:55.408 lat (msec) : 4=0.22%, 10=11.32%, 20=28.68%, 50=6.84%, 100=39.34% 00:16:55.408 lat (msec) : 250=12.21%, 500=1.40% 00:16:55.408 cpu : usr=0.49%, sys=0.32%, ctx=2182, majf=0, minf=3 00:16:55.408 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.408 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.408 issued rwts: total=640,720,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.408 latency : target=0, window=0, 
percentile=100.00%, depth=8 00:16:55.408 job8: (groupid=0, jobs=1): err= 0: pid=75028: Wed Jul 24 05:06:09 2024 00:16:55.408 read: IOPS=83, BW=10.4MiB/s (10.9MB/s)(80.0MiB/7664msec) 00:16:55.408 slat (usec): min=7, max=864, avg=39.26, stdev=77.11 00:16:55.408 clat (usec): min=4184, max=95468, avg=15537.28, stdev=11656.36 00:16:55.408 lat (usec): min=4197, max=95484, avg=15576.54, stdev=11667.10 00:16:55.408 clat percentiles (usec): 00:16:55.408 | 1.00th=[ 4686], 5.00th=[ 5342], 10.00th=[ 6128], 20.00th=[ 8717], 00:16:55.408 | 30.00th=[ 9372], 40.00th=[10683], 50.00th=[12518], 60.00th=[14353], 00:16:55.408 | 70.00th=[16450], 80.00th=[20055], 90.00th=[25560], 95.00th=[39060], 00:16:55.408 | 99.00th=[62653], 99.50th=[67634], 99.90th=[95945], 99.95th=[95945], 00:16:55.408 | 99.99th=[95945] 00:16:55.408 write: IOPS=73, BW=9457KiB/s (9684kB/s)(81.0MiB/8771msec); 0 zone resets 00:16:55.408 slat (usec): min=38, max=1306, avg=154.29, stdev=185.03 00:16:55.408 clat (msec): min=59, max=310, avg=107.55, stdev=37.66 00:16:55.408 lat (msec): min=59, max=310, avg=107.71, stdev=37.66 00:16:55.408 clat percentiles (msec): 00:16:55.408 | 1.00th=[ 62], 5.00th=[ 66], 10.00th=[ 69], 20.00th=[ 78], 00:16:55.408 | 30.00th=[ 84], 40.00th=[ 89], 50.00th=[ 96], 60.00th=[ 108], 00:16:55.408 | 70.00th=[ 121], 80.00th=[ 136], 90.00th=[ 161], 95.00th=[ 180], 00:16:55.408 | 99.00th=[ 213], 99.50th=[ 268], 99.90th=[ 313], 99.95th=[ 313], 00:16:55.408 | 99.99th=[ 313] 00:16:55.408 bw ( KiB/s): min= 3065, max=14080, per=0.77%, avg=8190.80, stdev=3239.18, samples=20 00:16:55.408 iops : min= 23, max= 110, avg=63.85, stdev=25.40, samples=20 00:16:55.408 lat (msec) : 10=17.47%, 20=22.28%, 50=8.62%, 100=27.64%, 250=23.68% 00:16:55.408 lat (msec) : 500=0.31% 00:16:55.408 cpu : usr=0.51%, sys=0.25%, ctx=2084, majf=0, minf=5 00:16:55.408 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.408 
complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.408 issued rwts: total=640,648,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.408 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.408 job9: (groupid=0, jobs=1): err= 0: pid=75029: Wed Jul 24 05:06:09 2024 00:16:55.408 read: IOPS=75, BW=9602KiB/s (9832kB/s)(80.0MiB/8532msec) 00:16:55.408 slat (usec): min=7, max=1127, avg=58.87, stdev=109.52 00:16:55.408 clat (usec): min=7361, max=86852, avg=19920.70, stdev=9412.87 00:16:55.408 lat (usec): min=7386, max=86860, avg=19979.57, stdev=9415.90 00:16:55.408 clat percentiles (usec): 00:16:55.408 | 1.00th=[ 8356], 5.00th=[10683], 10.00th=[11731], 20.00th=[12780], 00:16:55.408 | 30.00th=[13960], 40.00th=[16057], 50.00th=[17957], 60.00th=[19792], 00:16:55.408 | 70.00th=[22152], 80.00th=[25560], 90.00th=[29754], 95.00th=[34866], 00:16:55.408 | 99.00th=[54789], 99.50th=[73925], 99.90th=[86508], 99.95th=[86508], 00:16:55.408 | 99.99th=[86508] 00:16:55.408 write: IOPS=85, BW=10.7MiB/s (11.3MB/s)(90.5MiB/8429msec); 0 zone resets 00:16:55.409 slat (usec): min=40, max=2539, avg=142.02, stdev=207.44 00:16:55.409 clat (msec): min=56, max=447, avg=92.23, stdev=47.20 00:16:55.409 lat (msec): min=56, max=448, avg=92.37, stdev=47.19 00:16:55.409 clat percentiles (msec): 00:16:55.409 | 1.00th=[ 58], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 66], 00:16:55.409 | 30.00th=[ 70], 40.00th=[ 74], 50.00th=[ 80], 60.00th=[ 84], 00:16:55.409 | 70.00th=[ 90], 80.00th=[ 104], 90.00th=[ 127], 95.00th=[ 182], 00:16:55.409 | 99.00th=[ 363], 99.50th=[ 368], 99.90th=[ 447], 99.95th=[ 447], 00:16:55.409 | 99.99th=[ 447] 00:16:55.409 bw ( KiB/s): min= 256, max=14848, per=0.86%, avg=9176.05, stdev=4751.91, samples=20 00:16:55.409 iops : min= 2, max= 116, avg=71.60, stdev=37.17, samples=20 00:16:55.409 lat (msec) : 10=1.76%, 20=26.39%, 50=18.18%, 100=42.45%, 250=10.19% 00:16:55.409 lat (msec) : 500=1.03% 00:16:55.409 cpu : usr=0.58%, sys=0.23%, ctx=2265, majf=0, minf=1 
00:16:55.409 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.409 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.409 issued rwts: total=640,724,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.409 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.409 job10: (groupid=0, jobs=1): err= 0: pid=75030: Wed Jul 24 05:06:09 2024 00:16:55.409 read: IOPS=62, BW=8061KiB/s (8255kB/s)(61.6MiB/7828msec) 00:16:55.409 slat (usec): min=8, max=1797, avg=69.74, stdev=157.69 00:16:55.409 clat (msec): min=4, max=235, avg=20.43, stdev=35.07 00:16:55.409 lat (msec): min=4, max=236, avg=20.50, stdev=35.07 00:16:55.409 clat percentiles (msec): 00:16:55.409 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 9], 00:16:55.409 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 14], 00:16:55.409 | 70.00th=[ 16], 80.00th=[ 19], 90.00th=[ 28], 95.00th=[ 57], 00:16:55.409 | 99.00th=[ 228], 99.50th=[ 236], 99.90th=[ 236], 99.95th=[ 236], 00:16:55.409 | 99.99th=[ 236] 00:16:55.409 write: IOPS=72, BW=9331KiB/s (9555kB/s)(80.0MiB/8779msec); 0 zone resets 00:16:55.409 slat (usec): min=49, max=2313, avg=157.75, stdev=221.40 00:16:55.409 clat (msec): min=2, max=242, avg=108.99, stdev=45.52 00:16:55.409 lat (msec): min=2, max=242, avg=109.15, stdev=45.51 00:16:55.409 clat percentiles (msec): 00:16:55.409 | 1.00th=[ 8], 5.00th=[ 44], 10.00th=[ 63], 20.00th=[ 72], 00:16:55.409 | 30.00th=[ 81], 40.00th=[ 92], 50.00th=[ 103], 60.00th=[ 115], 00:16:55.409 | 70.00th=[ 131], 80.00th=[ 150], 90.00th=[ 174], 95.00th=[ 190], 00:16:55.409 | 99.00th=[ 218], 99.50th=[ 226], 99.90th=[ 243], 99.95th=[ 243], 00:16:55.409 | 99.99th=[ 243] 00:16:55.409 bw ( KiB/s): min= 2560, max=17664, per=0.76%, avg=8160.85, stdev=3397.98, samples=20 00:16:55.409 iops : min= 20, max= 138, avg=63.50, stdev=26.44, samples=20 00:16:55.409 lat (msec) : 4=0.18%, 10=18.53%, 
20=19.95%, 50=5.21%, 100=25.07% 00:16:55.409 lat (msec) : 250=31.07% 00:16:55.409 cpu : usr=0.51%, sys=0.27%, ctx=1907, majf=0, minf=7 00:16:55.409 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.409 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.409 issued rwts: total=493,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.409 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.409 job11: (groupid=0, jobs=1): err= 0: pid=75031: Wed Jul 24 05:06:09 2024 00:16:55.409 read: IOPS=59, BW=7591KiB/s (7773kB/s)(60.0MiB/8094msec) 00:16:55.409 slat (usec): min=7, max=1831, avg=52.22, stdev=121.64 00:16:55.409 clat (msec): min=5, max=525, avg=33.01, stdev=69.63 00:16:55.409 lat (msec): min=5, max=525, avg=33.07, stdev=69.62 00:16:55.409 clat percentiles (msec): 00:16:55.409 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 10], 00:16:55.409 | 30.00th=[ 11], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 16], 00:16:55.409 | 70.00th=[ 20], 80.00th=[ 25], 90.00th=[ 54], 95.00th=[ 125], 00:16:55.409 | 99.00th=[ 401], 99.50th=[ 447], 99.90th=[ 527], 99.95th=[ 527], 00:16:55.409 | 99.99th=[ 527] 00:16:55.409 write: IOPS=75, BW=9703KiB/s (9936kB/s)(76.2MiB/8047msec); 0 zone resets 00:16:55.409 slat (usec): min=49, max=1704, avg=133.29, stdev=154.08 00:16:55.409 clat (msec): min=52, max=377, avg=104.73, stdev=48.69 00:16:55.409 lat (msec): min=52, max=377, avg=104.86, stdev=48.70 00:16:55.409 clat percentiles (msec): 00:16:55.409 | 1.00th=[ 59], 5.00th=[ 62], 10.00th=[ 63], 20.00th=[ 66], 00:16:55.409 | 30.00th=[ 71], 40.00th=[ 77], 50.00th=[ 85], 60.00th=[ 100], 00:16:55.409 | 70.00th=[ 123], 80.00th=[ 142], 90.00th=[ 171], 95.00th=[ 197], 00:16:55.409 | 99.00th=[ 262], 99.50th=[ 317], 99.90th=[ 376], 99.95th=[ 376], 00:16:55.409 | 99.99th=[ 376] 00:16:55.409 bw ( KiB/s): min= 1021, max=14592, per=0.79%, avg=8447.18, 
stdev=4431.43, samples=17 00:16:55.409 iops : min= 7, max= 114, avg=65.82, stdev=34.71, samples=17 00:16:55.409 lat (msec) : 10=9.17%, 20=22.94%, 50=7.34%, 100=35.60%, 250=22.75% 00:16:55.409 lat (msec) : 500=2.11%, 750=0.09% 00:16:55.409 cpu : usr=0.62%, sys=0.13%, ctx=1796, majf=0, minf=3 00:16:55.409 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.409 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.409 issued rwts: total=480,610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.409 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.409 job12: (groupid=0, jobs=1): err= 0: pid=75032: Wed Jul 24 05:06:09 2024 00:16:55.409 read: IOPS=75, BW=9623KiB/s (9854kB/s)(80.0MiB/8513msec) 00:16:55.409 slat (usec): min=8, max=1644, avg=65.65, stdev=149.76 00:16:55.409 clat (usec): min=11813, max=89499, avg=21115.83, stdev=9278.83 00:16:55.409 lat (usec): min=11979, max=89515, avg=21181.48, stdev=9266.00 00:16:55.409 clat percentiles (usec): 00:16:55.409 | 1.00th=[12518], 5.00th=[13304], 10.00th=[13829], 20.00th=[15401], 00:16:55.409 | 30.00th=[16450], 40.00th=[17171], 50.00th=[18220], 60.00th=[20317], 00:16:55.409 | 70.00th=[22414], 80.00th=[25822], 90.00th=[30016], 95.00th=[32900], 00:16:55.409 | 99.00th=[78119], 99.50th=[79168], 99.90th=[89654], 99.95th=[89654], 00:16:55.409 | 99.99th=[89654] 00:16:55.409 write: IOPS=88, BW=11.0MiB/s (11.6MB/s)(92.2MiB/8356msec); 0 zone resets 00:16:55.409 slat (usec): min=44, max=18703, avg=173.42, stdev=708.06 00:16:55.409 clat (msec): min=11, max=278, avg=89.49, stdev=39.64 00:16:55.409 lat (msec): min=11, max=278, avg=89.66, stdev=39.63 00:16:55.409 clat percentiles (msec): 00:16:55.409 | 1.00th=[ 30], 5.00th=[ 61], 10.00th=[ 62], 20.00th=[ 64], 00:16:55.409 | 30.00th=[ 66], 40.00th=[ 70], 50.00th=[ 77], 60.00th=[ 87], 00:16:55.409 | 70.00th=[ 97], 80.00th=[ 106], 90.00th=[ 
131], 95.00th=[ 176], 00:16:55.409 | 99.00th=[ 257], 99.50th=[ 266], 99.90th=[ 279], 99.95th=[ 279], 00:16:55.409 | 99.99th=[ 279] 00:16:55.409 bw ( KiB/s): min= 1009, max=15616, per=0.87%, avg=9343.20, stdev=4788.60, samples=20 00:16:55.409 iops : min= 7, max= 122, avg=72.80, stdev=37.46, samples=20 00:16:55.409 lat (msec) : 20=27.79%, 50=19.38%, 100=38.75%, 250=13.50%, 500=0.58% 00:16:55.409 cpu : usr=0.70%, sys=0.29%, ctx=2288, majf=0, minf=5 00:16:55.409 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.409 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.409 issued rwts: total=640,738,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.409 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.409 job13: (groupid=0, jobs=1): err= 0: pid=75033: Wed Jul 24 05:06:09 2024 00:16:55.409 read: IOPS=74, BW=9546KiB/s (9775kB/s)(80.0MiB/8582msec) 00:16:55.409 slat (usec): min=7, max=1044, avg=62.04, stdev=106.23 00:16:55.409 clat (usec): min=9942, max=52085, avg=20098.66, stdev=7201.77 00:16:55.409 lat (usec): min=9964, max=52104, avg=20160.69, stdev=7196.05 00:16:55.409 clat percentiles (usec): 00:16:55.409 | 1.00th=[11076], 5.00th=[12518], 10.00th=[13304], 20.00th=[14484], 00:16:55.409 | 30.00th=[15664], 40.00th=[16909], 50.00th=[17957], 60.00th=[19268], 00:16:55.409 | 70.00th=[20579], 80.00th=[25560], 90.00th=[31851], 95.00th=[33817], 00:16:55.409 | 99.00th=[45876], 99.50th=[50070], 99.90th=[52167], 99.95th=[52167], 00:16:55.409 | 99.99th=[52167] 00:16:55.409 write: IOPS=88, BW=11.0MiB/s (11.5MB/s)(92.8MiB/8428msec); 0 zone resets 00:16:55.409 slat (usec): min=41, max=9486, avg=172.71, stdev=473.43 00:16:55.409 clat (msec): min=22, max=274, avg=89.88, stdev=34.59 00:16:55.409 lat (msec): min=23, max=274, avg=90.05, stdev=34.59 00:16:55.409 clat percentiles (msec): 00:16:55.409 | 1.00th=[ 30], 5.00th=[ 62], 10.00th=[ 
62], 20.00th=[ 65], 00:16:55.409 | 30.00th=[ 70], 40.00th=[ 75], 50.00th=[ 81], 60.00th=[ 88], 00:16:55.409 | 70.00th=[ 95], 80.00th=[ 107], 90.00th=[ 129], 95.00th=[ 163], 00:16:55.409 | 99.00th=[ 230], 99.50th=[ 251], 99.90th=[ 275], 99.95th=[ 275], 00:16:55.409 | 99.99th=[ 275] 00:16:55.409 bw ( KiB/s): min= 768, max=16160, per=0.93%, avg=9890.53, stdev=4358.47, samples=19 00:16:55.409 iops : min= 6, max= 126, avg=77.16, stdev=34.18, samples=19 00:16:55.409 lat (msec) : 10=0.07%, 20=30.39%, 50=16.21%, 100=40.16%, 250=12.88% 00:16:55.409 lat (msec) : 500=0.29% 00:16:55.409 cpu : usr=0.70%, sys=0.27%, ctx=2337, majf=0, minf=1 00:16:55.409 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.409 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.409 issued rwts: total=640,742,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.409 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.409 job14: (groupid=0, jobs=1): err= 0: pid=75034: Wed Jul 24 05:06:09 2024 00:16:55.409 read: IOPS=78, BW=9.81MiB/s (10.3MB/s)(80.0MiB/8153msec) 00:16:55.409 slat (usec): min=7, max=1050, avg=55.46, stdev=101.03 00:16:55.409 clat (usec): min=4228, max=49446, avg=15908.21, stdev=6856.09 00:16:55.409 lat (usec): min=4845, max=49642, avg=15963.68, stdev=6852.41 00:16:55.409 clat percentiles (usec): 00:16:55.409 | 1.00th=[ 5538], 5.00th=[ 6980], 10.00th=[ 8356], 20.00th=[10683], 00:16:55.409 | 30.00th=[11863], 40.00th=[13566], 50.00th=[15008], 60.00th=[16909], 00:16:55.409 | 70.00th=[18482], 80.00th=[19268], 90.00th=[22152], 95.00th=[32900], 00:16:55.410 | 99.00th=[40109], 99.50th=[45351], 99.90th=[49546], 99.95th=[49546], 00:16:55.410 | 99.99th=[49546] 00:16:55.410 write: IOPS=79, BW=9.92MiB/s (10.4MB/s)(87.0MiB/8767msec); 0 zone resets 00:16:55.410 slat (usec): min=40, max=6579, avg=142.64, stdev=294.35 00:16:55.410 clat (msec): min=38, 
max=408, avg=99.73, stdev=55.01 00:16:55.410 lat (msec): min=38, max=409, avg=99.87, stdev=55.01 00:16:55.410 clat percentiles (msec): 00:16:55.410 | 1.00th=[ 44], 5.00th=[ 61], 10.00th=[ 62], 20.00th=[ 64], 00:16:55.410 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 79], 60.00th=[ 90], 00:16:55.410 | 70.00th=[ 104], 80.00th=[ 122], 90.00th=[ 167], 95.00th=[ 215], 00:16:55.410 | 99.00th=[ 330], 99.50th=[ 359], 99.90th=[ 409], 99.95th=[ 409], 00:16:55.410 | 99.99th=[ 409] 00:16:55.410 bw ( KiB/s): min= 512, max=15360, per=0.83%, avg=8816.65, stdev=4915.81, samples=20 00:16:55.410 iops : min= 4, max= 120, avg=68.75, stdev=38.43, samples=20 00:16:55.410 lat (msec) : 10=8.16%, 20=31.89%, 50=8.46%, 100=34.58%, 250=15.12% 00:16:55.410 lat (msec) : 500=1.80% 00:16:55.410 cpu : usr=0.63%, sys=0.31%, ctx=2180, majf=0, minf=1 00:16:55.410 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.410 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.410 issued rwts: total=640,696,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.410 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.410 job15: (groupid=0, jobs=1): err= 0: pid=75035: Wed Jul 24 05:06:09 2024 00:16:55.410 read: IOPS=63, BW=8171KiB/s (8367kB/s)(60.0MiB/7519msec) 00:16:55.410 slat (usec): min=7, max=900, avg=52.62, stdev=101.84 00:16:55.410 clat (msec): min=4, max=454, avg=29.82, stdev=65.37 00:16:55.410 lat (msec): min=4, max=454, avg=29.87, stdev=65.37 00:16:55.410 clat percentiles (msec): 00:16:55.410 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 11], 00:16:55.410 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 15], 60.00th=[ 16], 00:16:55.410 | 70.00th=[ 18], 80.00th=[ 20], 90.00th=[ 29], 95.00th=[ 144], 00:16:55.410 | 99.00th=[ 447], 99.50th=[ 451], 99.90th=[ 456], 99.95th=[ 456], 00:16:55.410 | 99.99th=[ 456] 00:16:55.410 write: IOPS=73, BW=9411KiB/s 
(9637kB/s)(75.8MiB/8242msec); 0 zone resets 00:16:55.410 slat (usec): min=46, max=3454, avg=176.51, stdev=281.66 00:16:55.410 clat (msec): min=50, max=236, avg=108.19, stdev=39.14 00:16:55.410 lat (msec): min=50, max=236, avg=108.37, stdev=39.14 00:16:55.410 clat percentiles (msec): 00:16:55.410 | 1.00th=[ 61], 5.00th=[ 65], 10.00th=[ 68], 20.00th=[ 75], 00:16:55.410 | 30.00th=[ 81], 40.00th=[ 88], 50.00th=[ 96], 60.00th=[ 109], 00:16:55.410 | 70.00th=[ 125], 80.00th=[ 140], 90.00th=[ 171], 95.00th=[ 188], 00:16:55.410 | 99.00th=[ 224], 99.50th=[ 230], 99.90th=[ 236], 99.95th=[ 236], 00:16:55.410 | 99.99th=[ 236] 00:16:55.410 bw ( KiB/s): min= 1792, max=14336, per=0.74%, avg=7905.06, stdev=3700.65, samples=18 00:16:55.410 iops : min= 14, max= 112, avg=61.61, stdev=28.90, samples=18 00:16:55.410 lat (msec) : 10=8.01%, 20=27.99%, 50=5.43%, 100=29.37%, 250=28.27% 00:16:55.410 lat (msec) : 500=0.92% 00:16:55.410 cpu : usr=0.55%, sys=0.20%, ctx=1847, majf=0, minf=3 00:16:55.410 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.410 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.410 issued rwts: total=480,606,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.410 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.410 job16: (groupid=0, jobs=1): err= 0: pid=75036: Wed Jul 24 05:06:09 2024 00:16:55.410 read: IOPS=73, BW=9397KiB/s (9622kB/s)(80.0MiB/8718msec) 00:16:55.410 slat (usec): min=8, max=1044, avg=51.36, stdev=99.74 00:16:55.410 clat (usec): min=7012, max=47252, avg=15862.78, stdev=5963.25 00:16:55.410 lat (usec): min=7034, max=47326, avg=15914.14, stdev=5966.34 00:16:55.410 clat percentiles (usec): 00:16:55.410 | 1.00th=[ 9241], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[11600], 00:16:55.410 | 30.00th=[12387], 40.00th=[13173], 50.00th=[14353], 60.00th=[15270], 00:16:55.410 | 70.00th=[16909], 80.00th=[18744], 
90.00th=[22414], 95.00th=[29754], 00:16:55.410 | 99.00th=[38536], 99.50th=[40109], 99.90th=[47449], 99.95th=[47449], 00:16:55.410 | 99.99th=[47449] 00:16:55.410 write: IOPS=84, BW=10.6MiB/s (11.1MB/s)(92.9MiB/8791msec); 0 zone resets 00:16:55.410 slat (usec): min=51, max=1874, avg=145.65, stdev=175.85 00:16:55.410 clat (msec): min=17, max=452, avg=93.83, stdev=51.17 00:16:55.410 lat (msec): min=17, max=452, avg=93.98, stdev=51.15 00:16:55.410 clat percentiles (msec): 00:16:55.410 | 1.00th=[ 28], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 65], 00:16:55.410 | 30.00th=[ 69], 40.00th=[ 74], 50.00th=[ 80], 60.00th=[ 86], 00:16:55.410 | 70.00th=[ 93], 80.00th=[ 108], 90.00th=[ 136], 95.00th=[ 178], 00:16:55.410 | 99.00th=[ 313], 99.50th=[ 409], 99.90th=[ 451], 99.95th=[ 451], 00:16:55.410 | 99.99th=[ 451] 00:16:55.410 bw ( KiB/s): min= 1536, max=15903, per=0.88%, avg=9406.95, stdev=4642.60, samples=20 00:16:55.410 iops : min= 12, max= 124, avg=73.30, stdev=36.25, samples=20 00:16:55.410 lat (msec) : 10=2.96%, 20=35.57%, 50=8.82%, 100=39.12%, 250=12.00% 00:16:55.410 lat (msec) : 500=1.52% 00:16:55.410 cpu : usr=0.63%, sys=0.34%, ctx=2265, majf=0, minf=1 00:16:55.410 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.410 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.410 issued rwts: total=640,743,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.410 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.410 job17: (groupid=0, jobs=1): err= 0: pid=75037: Wed Jul 24 05:06:09 2024 00:16:55.410 read: IOPS=75, BW=9657KiB/s (9889kB/s)(80.0MiB/8483msec) 00:16:55.410 slat (usec): min=7, max=1095, avg=48.60, stdev=86.66 00:16:55.410 clat (usec): min=6987, max=39138, avg=16601.49, stdev=5989.76 00:16:55.410 lat (usec): min=7004, max=39226, avg=16650.09, stdev=5987.40 00:16:55.410 clat percentiles (usec): 00:16:55.410 | 1.00th=[ 
7898], 5.00th=[ 8848], 10.00th=[ 9634], 20.00th=[11863], 00:16:55.410 | 30.00th=[13173], 40.00th=[14353], 50.00th=[15795], 60.00th=[17171], 00:16:55.410 | 70.00th=[18220], 80.00th=[20317], 90.00th=[24249], 95.00th=[29754], 00:16:55.410 | 99.00th=[35914], 99.50th=[37487], 99.90th=[39060], 99.95th=[39060], 00:16:55.410 | 99.99th=[39060] 00:16:55.410 write: IOPS=87, BW=11.0MiB/s (11.5MB/s)(95.4MiB/8696msec); 0 zone resets 00:16:55.410 slat (usec): min=47, max=24187, avg=175.83, stdev=921.21 00:16:55.410 clat (msec): min=49, max=265, avg=90.03, stdev=36.37 00:16:55.410 lat (msec): min=50, max=265, avg=90.21, stdev=36.35 00:16:55.410 clat percentiles (msec): 00:16:55.410 | 1.00th=[ 56], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 65], 00:16:55.410 | 30.00th=[ 68], 40.00th=[ 73], 50.00th=[ 77], 60.00th=[ 83], 00:16:55.410 | 70.00th=[ 93], 80.00th=[ 108], 90.00th=[ 136], 95.00th=[ 178], 00:16:55.410 | 99.00th=[ 234], 99.50th=[ 247], 99.90th=[ 266], 99.95th=[ 266], 00:16:55.410 | 99.99th=[ 266] 00:16:55.410 bw ( KiB/s): min= 1792, max=15104, per=0.91%, avg=9670.20, stdev=4442.06, samples=20 00:16:55.410 iops : min= 14, max= 118, avg=75.45, stdev=34.69, samples=20 00:16:55.410 lat (msec) : 10=5.63%, 20=30.65%, 50=9.41%, 100=40.48%, 250=13.68% 00:16:55.410 lat (msec) : 500=0.14% 00:16:55.410 cpu : usr=0.72%, sys=0.26%, ctx=2320, majf=0, minf=3 00:16:55.410 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.410 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.410 issued rwts: total=640,763,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.410 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.410 job18: (groupid=0, jobs=1): err= 0: pid=75038: Wed Jul 24 05:06:09 2024 00:16:55.410 read: IOPS=75, BW=9659KiB/s (9891kB/s)(80.0MiB/8481msec) 00:16:55.410 slat (usec): min=7, max=1308, avg=58.01, stdev=119.32 00:16:55.410 clat 
(msec): min=10, max=150, avg=22.58, stdev=15.33 00:16:55.410 lat (msec): min=10, max=150, avg=22.64, stdev=15.33 00:16:55.410 clat percentiles (msec): 00:16:55.410 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 15], 00:16:55.410 | 30.00th=[ 17], 40.00th=[ 18], 50.00th=[ 19], 60.00th=[ 21], 00:16:55.410 | 70.00th=[ 24], 80.00th=[ 28], 90.00th=[ 33], 95.00th=[ 40], 00:16:55.410 | 99.00th=[ 138], 99.50th=[ 140], 99.90th=[ 150], 99.95th=[ 150], 00:16:55.410 | 99.99th=[ 150] 00:16:55.410 write: IOPS=86, BW=10.9MiB/s (11.4MB/s)(89.2MiB/8218msec); 0 zone resets 00:16:55.410 slat (usec): min=43, max=2835, avg=138.04, stdev=182.26 00:16:55.410 clat (msec): min=30, max=307, avg=91.08, stdev=43.52 00:16:55.410 lat (msec): min=30, max=308, avg=91.22, stdev=43.54 00:16:55.410 clat percentiles (msec): 00:16:55.410 | 1.00th=[ 37], 5.00th=[ 61], 10.00th=[ 62], 20.00th=[ 64], 00:16:55.410 | 30.00th=[ 66], 40.00th=[ 70], 50.00th=[ 77], 60.00th=[ 84], 00:16:55.410 | 70.00th=[ 94], 80.00th=[ 105], 90.00th=[ 138], 95.00th=[ 190], 00:16:55.410 | 99.00th=[ 271], 99.50th=[ 292], 99.90th=[ 309], 99.95th=[ 309], 00:16:55.410 | 99.99th=[ 309] 00:16:55.410 bw ( KiB/s): min= 255, max=15360, per=0.89%, avg=9524.32, stdev=4751.23, samples=19 00:16:55.410 iops : min= 1, max= 120, avg=74.26, stdev=37.19, samples=19 00:16:55.410 lat (msec) : 20=28.14%, 50=18.69%, 100=39.96%, 250=12.04%, 500=1.18% 00:16:55.410 cpu : usr=0.63%, sys=0.31%, ctx=2180, majf=0, minf=3 00:16:55.410 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.410 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.410 issued rwts: total=640,714,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.410 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.410 job19: (groupid=0, jobs=1): err= 0: pid=75039: Wed Jul 24 05:06:09 2024 00:16:55.410 read: IOPS=79, BW=9.94MiB/s 
(10.4MB/s)(80.0MiB/8046msec) 00:16:55.410 slat (usec): min=7, max=1492, avg=50.06, stdev=95.51 00:16:55.410 clat (usec): min=5333, max=46748, avg=13836.71, stdev=6619.25 00:16:55.410 lat (usec): min=5374, max=46839, avg=13886.78, stdev=6618.87 00:16:55.410 clat percentiles (usec): 00:16:55.410 | 1.00th=[ 5407], 5.00th=[ 6063], 10.00th=[ 7635], 20.00th=[ 9241], 00:16:55.411 | 30.00th=[10290], 40.00th=[11076], 50.00th=[12125], 60.00th=[13698], 00:16:55.411 | 70.00th=[15008], 80.00th=[16712], 90.00th=[21890], 95.00th=[28967], 00:16:55.411 | 99.00th=[41681], 99.50th=[43779], 99.90th=[46924], 99.95th=[46924], 00:16:55.411 | 99.99th=[46924] 00:16:55.411 write: IOPS=77, BW=9926KiB/s (10.2MB/s)(86.5MiB/8924msec); 0 zone resets 00:16:55.411 slat (usec): min=46, max=4431, avg=150.26, stdev=248.99 00:16:55.411 clat (msec): min=50, max=346, avg=102.12, stdev=49.74 00:16:55.411 lat (msec): min=50, max=346, avg=102.27, stdev=49.75 00:16:55.411 clat percentiles (msec): 00:16:55.411 | 1.00th=[ 60], 5.00th=[ 62], 10.00th=[ 64], 20.00th=[ 67], 00:16:55.411 | 30.00th=[ 69], 40.00th=[ 74], 50.00th=[ 80], 60.00th=[ 93], 00:16:55.411 | 70.00th=[ 108], 80.00th=[ 142], 90.00th=[ 176], 95.00th=[ 203], 00:16:55.411 | 99.00th=[ 279], 99.50th=[ 321], 99.90th=[ 347], 99.95th=[ 347], 00:16:55.411 | 99.99th=[ 347] 00:16:55.411 bw ( KiB/s): min= 2560, max=15104, per=0.82%, avg=8797.58, stdev=4338.64, samples=19 00:16:55.411 iops : min= 20, max= 118, avg=68.58, stdev=34.01, samples=19 00:16:55.411 lat (msec) : 10=13.59%, 20=28.23%, 50=6.23%, 100=34.61%, 250=16.44% 00:16:55.411 lat (msec) : 500=0.90% 00:16:55.411 cpu : usr=0.56%, sys=0.33%, ctx=2199, majf=0, minf=3 00:16:55.411 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.411 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.411 issued rwts: total=640,692,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:16:55.411 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.411 job20: (groupid=0, jobs=1): err= 0: pid=75042: Wed Jul 24 05:06:09 2024 00:16:55.411 read: IOPS=101, BW=12.7MiB/s (13.3MB/s)(107MiB/8458msec) 00:16:55.411 slat (usec): min=7, max=1581, avg=53.14, stdev=109.22 00:16:55.411 clat (msec): min=2, max=126, avg=12.33, stdev=16.04 00:16:55.411 lat (msec): min=3, max=126, avg=12.38, stdev=16.04 00:16:55.411 clat percentiles (msec): 00:16:55.411 | 1.00th=[ 4], 5.00th=[ 4], 10.00th=[ 5], 20.00th=[ 6], 00:16:55.411 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 10], 00:16:55.411 | 70.00th=[ 11], 80.00th=[ 13], 90.00th=[ 20], 95.00th=[ 34], 00:16:55.411 | 99.00th=[ 105], 99.50th=[ 112], 99.90th=[ 127], 99.95th=[ 127], 00:16:55.411 | 99.99th=[ 127] 00:16:55.411 write: IOPS=110, BW=13.8MiB/s (14.5MB/s)(120MiB/8664msec); 0 zone resets 00:16:55.411 slat (usec): min=48, max=5349, avg=142.36, stdev=263.38 00:16:55.411 clat (msec): min=32, max=254, avg=71.25, stdev=28.24 00:16:55.411 lat (msec): min=37, max=255, avg=71.39, stdev=28.23 00:16:55.411 clat percentiles (msec): 00:16:55.411 | 1.00th=[ 41], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 51], 00:16:55.411 | 30.00th=[ 55], 40.00th=[ 60], 50.00th=[ 66], 60.00th=[ 70], 00:16:55.411 | 70.00th=[ 78], 80.00th=[ 87], 90.00th=[ 101], 95.00th=[ 115], 00:16:55.411 | 99.00th=[ 192], 99.50th=[ 228], 99.90th=[ 255], 99.95th=[ 255], 00:16:55.411 | 99.99th=[ 255] 00:16:55.411 bw ( KiB/s): min= 3065, max=18944, per=1.14%, avg=12138.00, stdev=4650.29, samples=19 00:16:55.411 iops : min= 23, max= 148, avg=94.63, stdev=36.55, samples=19 00:16:55.411 lat (msec) : 4=3.30%, 10=28.40%, 20=10.84%, 50=12.38%, 100=39.02% 00:16:55.411 lat (msec) : 250=6.00%, 500=0.06% 00:16:55.411 cpu : usr=0.69%, sys=0.36%, ctx=2915, majf=0, minf=5 00:16:55.411 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.411 
complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.411 issued rwts: total=857,960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.411 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.411 job21: (groupid=0, jobs=1): err= 0: pid=75046: Wed Jul 24 05:06:09 2024 00:16:55.411 read: IOPS=108, BW=13.6MiB/s (14.2MB/s)(123MiB/9079msec) 00:16:55.411 slat (usec): min=6, max=4704, avg=48.91, stdev=189.92 00:16:55.411 clat (usec): min=3840, max=89732, avg=9688.93, stdev=8954.74 00:16:55.411 lat (usec): min=3860, max=89789, avg=9737.84, stdev=8974.62 00:16:55.411 clat percentiles (usec): 00:16:55.411 | 1.00th=[ 4359], 5.00th=[ 4817], 10.00th=[ 5145], 20.00th=[ 5800], 00:16:55.411 | 30.00th=[ 6390], 40.00th=[ 6980], 50.00th=[ 7701], 60.00th=[ 8848], 00:16:55.411 | 70.00th=[ 9896], 80.00th=[11207], 90.00th=[13960], 95.00th=[17171], 00:16:55.411 | 99.00th=[60556], 99.50th=[84411], 99.90th=[89654], 99.95th=[89654], 00:16:55.411 | 99.99th=[89654] 00:16:55.411 write: IOPS=127, BW=15.9MiB/s (16.7MB/s)(140MiB/8793msec); 0 zone resets 00:16:55.411 slat (usec): min=35, max=32006, avg=149.48, stdev=988.11 00:16:55.411 clat (usec): min=1766, max=207430, avg=62092.20, stdev=24803.72 00:16:55.411 lat (msec): min=2, max=207, avg=62.24, stdev=24.77 00:16:55.411 clat percentiles (msec): 00:16:55.411 | 1.00th=[ 10], 5.00th=[ 41], 10.00th=[ 43], 20.00th=[ 46], 00:16:55.411 | 30.00th=[ 49], 40.00th=[ 52], 50.00th=[ 55], 60.00th=[ 60], 00:16:55.411 | 70.00th=[ 66], 80.00th=[ 75], 90.00th=[ 93], 95.00th=[ 113], 00:16:55.411 | 99.00th=[ 153], 99.50th=[ 176], 99.90th=[ 182], 99.95th=[ 207], 00:16:55.411 | 99.99th=[ 207] 00:16:55.411 bw ( KiB/s): min= 3328, max=22272, per=1.35%, avg=14378.84, stdev=5823.60, samples=19 00:16:55.411 iops : min= 26, max= 174, avg=112.11, stdev=45.47, samples=19 00:16:55.411 lat (msec) : 2=0.05%, 4=0.19%, 10=33.63%, 20=12.30%, 50=18.95% 00:16:55.411 lat (msec) : 100=30.59%, 250=4.28% 00:16:55.411 cpu : usr=0.79%, sys=0.45%, 
ctx=3085, majf=0, minf=5 00:16:55.411 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.411 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.411 issued rwts: total=985,1120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.411 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.411 job22: (groupid=0, jobs=1): err= 0: pid=75048: Wed Jul 24 05:06:09 2024 00:16:55.411 read: IOPS=109, BW=13.7MiB/s (14.4MB/s)(120MiB/8732msec) 00:16:55.411 slat (usec): min=7, max=2054, avg=41.37, stdev=92.95 00:16:55.412 clat (msec): min=2, max=100, avg=12.55, stdev=13.02 00:16:55.412 lat (msec): min=3, max=100, avg=12.59, stdev=13.02 00:16:55.412 clat percentiles (msec): 00:16:55.412 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 6], 00:16:55.412 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 10], 00:16:55.412 | 70.00th=[ 12], 80.00th=[ 16], 90.00th=[ 24], 95.00th=[ 35], 00:16:55.412 | 99.00th=[ 79], 99.50th=[ 83], 99.90th=[ 101], 99.95th=[ 101], 00:16:55.412 | 99.99th=[ 101] 00:16:55.412 write: IOPS=113, BW=14.1MiB/s (14.8MB/s)(121MiB/8557msec); 0 zone resets 00:16:55.412 slat (usec): min=40, max=50025, avg=195.88, stdev=1623.74 00:16:55.412 clat (msec): min=2, max=232, avg=70.04, stdev=29.52 00:16:55.412 lat (msec): min=2, max=233, avg=70.24, stdev=29.47 00:16:55.412 clat percentiles (msec): 00:16:55.412 | 1.00th=[ 33], 5.00th=[ 43], 10.00th=[ 45], 20.00th=[ 50], 00:16:55.412 | 30.00th=[ 55], 40.00th=[ 57], 50.00th=[ 63], 60.00th=[ 68], 00:16:55.412 | 70.00th=[ 78], 80.00th=[ 87], 90.00th=[ 101], 95.00th=[ 114], 00:16:55.412 | 99.00th=[ 203], 99.50th=[ 220], 99.90th=[ 234], 99.95th=[ 234], 00:16:55.412 | 99.99th=[ 234] 00:16:55.412 bw ( KiB/s): min= 2308, max=19968, per=1.15%, avg=12237.80, stdev=4920.28, samples=20 00:16:55.412 iops : min= 18, max= 156, avg=95.30, stdev=38.44, samples=20 00:16:55.412 lat (msec) : 
4=2.33%, 10=28.58%, 20=12.45%, 50=15.82%, 100=35.68% 00:16:55.412 lat (msec) : 250=5.13% 00:16:55.412 cpu : usr=0.72%, sys=0.40%, ctx=3036, majf=0, minf=3 00:16:55.412 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.412 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.412 issued rwts: total=960,968,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.412 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.412 job23: (groupid=0, jobs=1): err= 0: pid=75049: Wed Jul 24 05:06:09 2024 00:16:55.412 read: IOPS=106, BW=13.3MiB/s (14.0MB/s)(120MiB/9017msec) 00:16:55.412 slat (usec): min=7, max=745, avg=45.43, stdev=77.46 00:16:55.412 clat (msec): min=3, max=107, avg=15.23, stdev=13.05 00:16:55.412 lat (msec): min=3, max=107, avg=15.28, stdev=13.05 00:16:55.412 clat percentiles (msec): 00:16:55.412 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:16:55.412 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 14], 00:16:55.412 | 70.00th=[ 15], 80.00th=[ 17], 90.00th=[ 22], 95.00th=[ 31], 00:16:55.412 | 99.00th=[ 82], 99.50th=[ 96], 99.90th=[ 108], 99.95th=[ 108], 00:16:55.412 | 99.99th=[ 108] 00:16:55.412 write: IOPS=122, BW=15.4MiB/s (16.1MB/s)(126MiB/8191msec); 0 zone resets 00:16:55.412 slat (usec): min=28, max=2838, avg=130.44, stdev=211.41 00:16:55.412 clat (msec): min=17, max=253, avg=64.55, stdev=29.48 00:16:55.412 lat (msec): min=17, max=253, avg=64.68, stdev=29.50 00:16:55.412 clat percentiles (msec): 00:16:55.412 | 1.00th=[ 38], 5.00th=[ 42], 10.00th=[ 44], 20.00th=[ 46], 00:16:55.412 | 30.00th=[ 49], 40.00th=[ 52], 50.00th=[ 56], 60.00th=[ 61], 00:16:55.412 | 70.00th=[ 68], 80.00th=[ 77], 90.00th=[ 93], 95.00th=[ 122], 00:16:55.412 | 99.00th=[ 199], 99.50th=[ 236], 99.90th=[ 253], 99.95th=[ 253], 00:16:55.412 | 99.99th=[ 253] 00:16:55.412 bw ( KiB/s): min= 2560, max=22060, per=1.20%, avg=12787.35, 
stdev=6625.36, samples=20 00:16:55.412 iops : min= 20, max= 172, avg=99.75, stdev=51.87, samples=20 00:16:55.412 lat (msec) : 4=0.31%, 10=14.09%, 20=29.35%, 50=21.52%, 100=30.47% 00:16:55.412 lat (msec) : 250=4.12%, 500=0.15% 00:16:55.412 cpu : usr=0.74%, sys=0.40%, ctx=3209, majf=0, minf=5 00:16:55.412 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.412 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.412 issued rwts: total=960,1006,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.412 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.412 job24: (groupid=0, jobs=1): err= 0: pid=75050: Wed Jul 24 05:06:09 2024 00:16:55.412 read: IOPS=106, BW=13.3MiB/s (14.0MB/s)(120MiB/9015msec) 00:16:55.412 slat (usec): min=7, max=2044, avg=63.30, stdev=135.88 00:16:55.412 clat (msec): min=2, max=142, avg=15.68, stdev=17.35 00:16:55.412 lat (msec): min=3, max=142, avg=15.74, stdev=17.34 00:16:55.412 clat percentiles (msec): 00:16:55.412 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 7], 20.00th=[ 8], 00:16:55.412 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 13], 00:16:55.412 | 70.00th=[ 15], 80.00th=[ 19], 90.00th=[ 25], 95.00th=[ 47], 00:16:55.412 | 99.00th=[ 99], 99.50th=[ 129], 99.90th=[ 142], 99.95th=[ 142], 00:16:55.412 | 99.99th=[ 142] 00:16:55.412 write: IOPS=119, BW=15.0MiB/s (15.7MB/s)(122MiB/8127msec); 0 zone resets 00:16:55.412 slat (usec): min=40, max=5191, avg=123.68, stdev=255.48 00:16:55.412 clat (msec): min=23, max=261, avg=66.20, stdev=28.91 00:16:55.412 lat (msec): min=23, max=261, avg=66.32, stdev=28.90 00:16:55.412 clat percentiles (msec): 00:16:55.412 | 1.00th=[ 39], 5.00th=[ 42], 10.00th=[ 43], 20.00th=[ 46], 00:16:55.412 | 30.00th=[ 50], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 65], 00:16:55.412 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 99], 95.00th=[ 110], 00:16:55.412 | 99.00th=[ 174], 99.50th=[ 
253], 99.90th=[ 262], 99.95th=[ 262], 00:16:55.412 | 99.99th=[ 262] 00:16:55.412 bw ( KiB/s): min= 1788, max=20992, per=1.16%, avg=12374.40, stdev=6161.05, samples=20 00:16:55.412 iops : min= 13, max= 164, avg=96.50, stdev=48.29, samples=20 00:16:55.412 lat (msec) : 4=0.98%, 10=20.06%, 20=21.04%, 50=21.35%, 100=31.49% 00:16:55.412 lat (msec) : 250=4.76%, 500=0.31% 00:16:55.412 cpu : usr=0.77%, sys=0.36%, ctx=3024, majf=0, minf=1 00:16:55.412 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.412 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.412 issued rwts: total=960,974,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.412 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.412 job25: (groupid=0, jobs=1): err= 0: pid=75051: Wed Jul 24 05:06:09 2024 00:16:55.412 read: IOPS=105, BW=13.2MiB/s (13.8MB/s)(120MiB/9117msec) 00:16:55.412 slat (usec): min=7, max=1062, avg=47.26, stdev=100.83 00:16:55.412 clat (usec): min=3775, max=83288, avg=10874.40, stdev=7227.77 00:16:55.412 lat (usec): min=4346, max=83297, avg=10921.65, stdev=7231.63 00:16:55.412 clat percentiles (usec): 00:16:55.412 | 1.00th=[ 4948], 5.00th=[ 5276], 10.00th=[ 5604], 20.00th=[ 6325], 00:16:55.412 | 30.00th=[ 7570], 40.00th=[ 8848], 50.00th=[ 9634], 60.00th=[10421], 00:16:55.412 | 70.00th=[11469], 80.00th=[13173], 90.00th=[16909], 95.00th=[19792], 00:16:55.412 | 99.00th=[27919], 99.50th=[76022], 99.90th=[83362], 99.95th=[83362], 00:16:55.412 | 99.99th=[83362] 00:16:55.412 write: IOPS=126, BW=15.8MiB/s (16.6MB/s)(138MiB/8730msec); 0 zone resets 00:16:55.412 slat (usec): min=41, max=3674, avg=125.66, stdev=188.07 00:16:55.412 clat (usec): min=927, max=198487, avg=62632.36, stdev=24892.23 00:16:55.412 lat (usec): min=1019, max=198540, avg=62758.03, stdev=24883.68 00:16:55.412 clat percentiles (msec): 00:16:55.412 | 1.00th=[ 14], 5.00th=[ 42], 
10.00th=[ 43], 20.00th=[ 46], 00:16:55.412 | 30.00th=[ 50], 40.00th=[ 53], 50.00th=[ 56], 60.00th=[ 60], 00:16:55.412 | 70.00th=[ 65], 80.00th=[ 73], 90.00th=[ 99], 95.00th=[ 116], 00:16:55.412 | 99.00th=[ 148], 99.50th=[ 159], 99.90th=[ 190], 99.95th=[ 199], 00:16:55.412 | 99.99th=[ 199] 00:16:55.412 bw ( KiB/s): min= 4096, max=22016, per=1.31%, avg=14034.40, stdev=6229.65, samples=20 00:16:55.412 iops : min= 32, max= 172, avg=109.55, stdev=48.72, samples=20 00:16:55.412 lat (usec) : 1000=0.05% 00:16:55.412 lat (msec) : 4=0.15%, 10=25.58%, 20=19.23%, 50=19.14%, 100=30.96% 00:16:55.412 lat (msec) : 250=4.89% 00:16:55.412 cpu : usr=0.76%, sys=0.45%, ctx=3218, majf=0, minf=9 00:16:55.412 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.412 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.412 issued rwts: total=960,1104,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.412 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.412 job26: (groupid=0, jobs=1): err= 0: pid=75052: Wed Jul 24 05:06:09 2024 00:16:55.412 read: IOPS=105, BW=13.2MiB/s (13.8MB/s)(120MiB/9098msec) 00:16:55.412 slat (usec): min=7, max=2432, avg=56.06, stdev=131.93 00:16:55.412 clat (usec): min=3008, max=94835, avg=13203.25, stdev=12850.14 00:16:55.412 lat (usec): min=3099, max=94849, avg=13259.31, stdev=12849.39 00:16:55.412 clat percentiles (usec): 00:16:55.412 | 1.00th=[ 3687], 5.00th=[ 4293], 10.00th=[ 5407], 20.00th=[ 6521], 00:16:55.413 | 30.00th=[ 7242], 40.00th=[ 8094], 50.00th=[ 9634], 60.00th=[10683], 00:16:55.413 | 70.00th=[12256], 80.00th=[15401], 90.00th=[21627], 95.00th=[39060], 00:16:55.413 | 99.00th=[74974], 99.50th=[87557], 99.90th=[94897], 99.95th=[94897], 00:16:55.413 | 99.99th=[94897] 00:16:55.413 write: IOPS=122, BW=15.3MiB/s (16.1MB/s)(130MiB/8450msec); 0 zone resets 00:16:55.413 slat (usec): min=47, max=12775, avg=146.84, 
stdev=439.31 00:16:55.413 clat (msec): min=10, max=191, avg=64.49, stdev=25.22 00:16:55.413 lat (msec): min=10, max=191, avg=64.63, stdev=25.20 00:16:55.413 clat percentiles (msec): 00:16:55.413 | 1.00th=[ 37], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 46], 00:16:55.413 | 30.00th=[ 50], 40.00th=[ 54], 50.00th=[ 58], 60.00th=[ 62], 00:16:55.413 | 70.00th=[ 69], 80.00th=[ 78], 90.00th=[ 95], 95.00th=[ 117], 00:16:55.413 | 99.00th=[ 169], 99.50th=[ 180], 99.90th=[ 182], 99.95th=[ 192], 00:16:55.413 | 99.99th=[ 192] 00:16:55.413 bw ( KiB/s): min= 3328, max=22060, per=1.23%, avg=13167.40, stdev=5603.92, samples=20 00:16:55.413 iops : min= 26, max= 172, avg=102.75, stdev=43.77, samples=20 00:16:55.413 lat (msec) : 4=1.55%, 10=23.95%, 20=17.43%, 50=20.04%, 100=32.67% 00:16:55.413 lat (msec) : 250=4.36% 00:16:55.413 cpu : usr=0.85%, sys=0.31%, ctx=3184, majf=0, minf=3 00:16:55.413 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.413 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.413 issued rwts: total=960,1036,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.413 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.413 job27: (groupid=0, jobs=1): err= 0: pid=75053: Wed Jul 24 05:06:09 2024 00:16:55.413 read: IOPS=102, BW=12.8MiB/s (13.5MB/s)(120MiB/9340msec) 00:16:55.413 slat (usec): min=7, max=1069, avg=45.42, stdev=92.58 00:16:55.413 clat (msec): min=2, max=196, avg=10.26, stdev=17.23 00:16:55.413 lat (msec): min=2, max=196, avg=10.31, stdev=17.22 00:16:55.413 clat percentiles (msec): 00:16:55.413 | 1.00th=[ 4], 5.00th=[ 4], 10.00th=[ 5], 20.00th=[ 6], 00:16:55.413 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 9], 00:16:55.413 | 70.00th=[ 10], 80.00th=[ 11], 90.00th=[ 14], 95.00th=[ 18], 00:16:55.413 | 99.00th=[ 40], 99.50th=[ 192], 99.90th=[ 197], 99.95th=[ 197], 00:16:55.413 | 99.99th=[ 197] 00:16:55.413 
write: IOPS=126, BW=15.9MiB/s (16.6MB/s)(140MiB/8808msec); 0 zone resets 00:16:55.413 slat (usec): min=42, max=19482, avg=141.36, stdev=608.31 00:16:55.413 clat (usec): min=816, max=199628, avg=62524.38, stdev=27062.92 00:16:55.413 lat (usec): min=1740, max=199687, avg=62665.74, stdev=27064.41 00:16:55.413 clat percentiles (msec): 00:16:55.413 | 1.00th=[ 4], 5.00th=[ 39], 10.00th=[ 43], 20.00th=[ 45], 00:16:55.413 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 56], 60.00th=[ 61], 00:16:55.413 | 70.00th=[ 68], 80.00th=[ 81], 90.00th=[ 99], 95.00th=[ 116], 00:16:55.413 | 99.00th=[ 153], 99.50th=[ 165], 99.90th=[ 192], 99.95th=[ 201], 00:16:55.413 | 99.99th=[ 201] 00:16:55.413 bw ( KiB/s): min= 5554, max=31232, per=1.33%, avg=14185.75, stdev=6124.51, samples=20 00:16:55.413 iops : min= 43, max= 244, avg=110.60, stdev=47.88, samples=20 00:16:55.413 lat (usec) : 1000=0.05% 00:16:55.413 lat (msec) : 2=0.05%, 4=3.56%, 10=32.39%, 20=10.15%, 50=18.67% 00:16:55.413 lat (msec) : 100=29.69%, 250=5.44% 00:16:55.413 cpu : usr=0.85%, sys=0.38%, ctx=3165, majf=0, minf=1 00:16:55.413 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.413 complete : 0=0.0%, 4=99.4%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.413 issued rwts: total=960,1118,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.413 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.413 job28: (groupid=0, jobs=1): err= 0: pid=75054: Wed Jul 24 05:06:09 2024 00:16:55.413 read: IOPS=107, BW=13.4MiB/s (14.0MB/s)(120MiB/8958msec) 00:16:55.413 slat (usec): min=7, max=1996, avg=52.61, stdev=110.76 00:16:55.413 clat (usec): min=3365, max=42327, avg=12460.20, stdev=6267.20 00:16:55.413 lat (usec): min=3386, max=42335, avg=12512.82, stdev=6269.48 00:16:55.413 clat percentiles (usec): 00:16:55.413 | 1.00th=[ 4883], 5.00th=[ 5276], 10.00th=[ 5866], 20.00th=[ 8094], 00:16:55.413 | 30.00th=[ 8979], 
40.00th=[ 9634], 50.00th=[10552], 60.00th=[11994], 00:16:55.413 | 70.00th=[13829], 80.00th=[16450], 90.00th=[20579], 95.00th=[23987], 00:16:55.413 | 99.00th=[36439], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:16:55.413 | 99.99th=[42206] 00:16:55.413 write: IOPS=124, BW=15.6MiB/s (16.3MB/s)(133MiB/8511msec); 0 zone resets 00:16:55.413 slat (usec): min=44, max=2822, avg=123.97, stdev=173.51 00:16:55.413 clat (msec): min=37, max=366, avg=63.55, stdev=33.32 00:16:55.413 lat (msec): min=37, max=366, avg=63.68, stdev=33.32 00:16:55.413 clat percentiles (msec): 00:16:55.413 | 1.00th=[ 39], 5.00th=[ 42], 10.00th=[ 44], 20.00th=[ 45], 00:16:55.413 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 54], 60.00th=[ 58], 00:16:55.413 | 70.00th=[ 64], 80.00th=[ 71], 90.00th=[ 94], 95.00th=[ 123], 00:16:55.413 | 99.00th=[ 190], 99.50th=[ 288], 99.90th=[ 355], 99.95th=[ 368], 00:16:55.413 | 99.99th=[ 368] 00:16:55.413 bw ( KiB/s): min= 2048, max=21972, per=1.26%, avg=13486.80, stdev=6866.05, samples=20 00:16:55.413 iops : min= 16, max= 171, avg=105.25, stdev=53.65, samples=20 00:16:55.413 lat (msec) : 4=0.10%, 10=20.68%, 20=21.43%, 50=25.63%, 100=27.66% 00:16:55.413 lat (msec) : 250=4.06%, 500=0.45% 00:16:55.413 cpu : usr=0.85%, sys=0.32%, ctx=3205, majf=0, minf=3 00:16:55.413 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.413 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.413 issued rwts: total=960,1061,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.413 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.413 job29: (groupid=0, jobs=1): err= 0: pid=75055: Wed Jul 24 05:06:09 2024 00:16:55.413 read: IOPS=95, BW=12.0MiB/s (12.6MB/s)(100MiB/8344msec) 00:16:55.413 slat (usec): min=8, max=692, avg=46.42, stdev=82.48 00:16:55.413 clat (msec): min=2, max=124, avg=14.83, stdev=15.37 00:16:55.413 lat (msec): min=2, max=124, 
avg=14.87, stdev=15.37 00:16:55.413 clat percentiles (msec): 00:16:55.413 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:16:55.413 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 11], 60.00th=[ 12], 00:16:55.413 | 70.00th=[ 14], 80.00th=[ 18], 90.00th=[ 25], 95.00th=[ 40], 00:16:55.413 | 99.00th=[ 81], 99.50th=[ 123], 99.90th=[ 125], 99.95th=[ 125], 00:16:55.413 | 99.99th=[ 125] 00:16:55.413 write: IOPS=111, BW=13.9MiB/s (14.6MB/s)(119MiB/8553msec); 0 zone resets 00:16:55.413 slat (usec): min=50, max=28873, avg=167.93, stdev=965.22 00:16:55.413 clat (msec): min=37, max=168, avg=71.05, stdev=23.98 00:16:55.413 lat (msec): min=37, max=168, avg=71.22, stdev=23.97 00:16:55.413 clat percentiles (msec): 00:16:55.413 | 1.00th=[ 42], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 52], 00:16:55.413 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 64], 60.00th=[ 71], 00:16:55.413 | 70.00th=[ 81], 80.00th=[ 88], 90.00th=[ 105], 95.00th=[ 124], 00:16:55.413 | 99.00th=[ 144], 99.50th=[ 153], 99.90th=[ 169], 99.95th=[ 169], 00:16:55.413 | 99.99th=[ 169] 00:16:55.413 bw ( KiB/s): min= 1792, max=17408, per=1.13%, avg=12087.00, stdev=5119.06, samples=20 00:16:55.413 iops : min= 14, max= 136, avg=94.30, stdev=40.00, samples=20 00:16:55.413 lat (msec) : 4=0.23%, 10=22.20%, 20=16.89%, 50=14.16%, 100=39.10% 00:16:55.413 lat (msec) : 250=7.42% 00:16:55.413 cpu : usr=0.78%, sys=0.28%, ctx=2838, majf=0, minf=5 00:16:55.413 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.413 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.413 issued rwts: total=800,952,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.413 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.413 job30: (groupid=0, jobs=1): err= 0: pid=75056: Wed Jul 24 05:06:09 2024 00:16:55.413 read: IOPS=77, BW=9952KiB/s (10.2MB/s)(77.0MiB/7923msec) 00:16:55.413 slat (usec): min=7, max=626, 
avg=43.03, stdev=76.63 00:16:55.413 clat (msec): min=4, max=103, avg=17.59, stdev=16.65 00:16:55.413 lat (msec): min=4, max=103, avg=17.64, stdev=16.65 00:16:55.413 clat percentiles (msec): 00:16:55.413 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 10], 00:16:55.413 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 15], 00:16:55.413 | 70.00th=[ 18], 80.00th=[ 22], 90.00th=[ 30], 95.00th=[ 42], 00:16:55.413 | 99.00th=[ 99], 99.50th=[ 100], 99.90th=[ 104], 99.95th=[ 104], 00:16:55.413 | 99.99th=[ 104] 00:16:55.413 write: IOPS=74, BW=9477KiB/s (9705kB/s)(80.0MiB/8644msec); 0 zone resets 00:16:55.413 slat (usec): min=39, max=24447, avg=170.50, stdev=980.73 00:16:55.413 clat (msec): min=54, max=336, avg=107.03, stdev=38.31 00:16:55.413 lat (msec): min=54, max=336, avg=107.20, stdev=38.26 00:16:55.413 clat percentiles (msec): 00:16:55.413 | 1.00th=[ 61], 5.00th=[ 63], 10.00th=[ 66], 20.00th=[ 72], 00:16:55.413 | 30.00th=[ 80], 40.00th=[ 91], 50.00th=[ 101], 60.00th=[ 111], 00:16:55.413 | 70.00th=[ 125], 80.00th=[ 136], 90.00th=[ 153], 95.00th=[ 176], 00:16:55.413 | 99.00th=[ 230], 99.50th=[ 266], 99.90th=[ 338], 99.95th=[ 338], 00:16:55.413 | 99.99th=[ 338] 00:16:55.413 bw ( KiB/s): min= 1792, max=13568, per=0.80%, avg=8524.11, stdev=2962.63, samples=19 00:16:55.413 iops : min= 14, max= 106, avg=66.53, stdev=23.15, samples=19 00:16:55.413 lat (msec) : 10=14.73%, 20=22.93%, 50=9.39%, 100=27.23%, 250=25.40% 00:16:55.413 lat (msec) : 500=0.32% 00:16:55.413 cpu : usr=0.55%, sys=0.18%, ctx=1991, majf=0, minf=3 00:16:55.413 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.413 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.413 issued rwts: total=616,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.413 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.413 job31: (groupid=0, jobs=1): err= 0: pid=75057: 
Wed Jul 24 05:06:09 2024 00:16:55.413 read: IOPS=74, BW=9521KiB/s (9750kB/s)(80.0MiB/8604msec) 00:16:55.413 slat (usec): min=8, max=1385, avg=48.54, stdev=105.89 00:16:55.413 clat (msec): min=5, max=123, avg=18.78, stdev=16.28 00:16:55.413 lat (msec): min=5, max=123, avg=18.83, stdev=16.28 00:16:55.413 clat percentiles (msec): 00:16:55.414 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:16:55.414 | 30.00th=[ 11], 40.00th=[ 13], 50.00th=[ 15], 60.00th=[ 17], 00:16:55.414 | 70.00th=[ 19], 80.00th=[ 23], 90.00th=[ 29], 95.00th=[ 51], 00:16:55.414 | 99.00th=[ 111], 99.50th=[ 117], 99.90th=[ 124], 99.95th=[ 124], 00:16:55.414 | 99.99th=[ 124] 00:16:55.414 write: IOPS=88, BW=11.1MiB/s (11.6MB/s)(94.2MiB/8524msec); 0 zone resets 00:16:55.414 slat (usec): min=43, max=7068, avg=139.52, stdev=333.16 00:16:55.414 clat (msec): min=14, max=405, avg=89.81, stdev=39.39 00:16:55.414 lat (msec): min=14, max=406, avg=89.95, stdev=39.38 00:16:55.414 clat percentiles (msec): 00:16:55.414 | 1.00th=[ 22], 5.00th=[ 61], 10.00th=[ 62], 20.00th=[ 66], 00:16:55.414 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 84], 00:16:55.414 | 70.00th=[ 94], 80.00th=[ 112], 90.00th=[ 136], 95.00th=[ 169], 00:16:55.414 | 99.00th=[ 241], 99.50th=[ 255], 99.90th=[ 405], 99.95th=[ 405], 00:16:55.414 | 99.99th=[ 405] 00:16:55.414 bw ( KiB/s): min= 766, max=15872, per=0.94%, avg=10037.53, stdev=4233.83, samples=19 00:16:55.414 iops : min= 5, max= 124, avg=78.05, stdev=33.12, samples=19 00:16:55.414 lat (msec) : 10=11.12%, 20=22.96%, 50=11.12%, 100=40.24%, 250=14.13% 00:16:55.414 lat (msec) : 500=0.43% 00:16:55.414 cpu : usr=0.66%, sys=0.17%, ctx=2157, majf=0, minf=1 00:16:55.414 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.414 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.414 issued rwts: total=640,754,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:16:55.414 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.414 job32: (groupid=0, jobs=1): err= 0: pid=75058: Wed Jul 24 05:06:09 2024 00:16:55.414 read: IOPS=82, BW=10.3MiB/s (10.8MB/s)(80.0MiB/7749msec) 00:16:55.414 slat (usec): min=7, max=4179, avg=61.48, stdev=215.41 00:16:55.414 clat (msec): min=4, max=142, avg=14.48, stdev=15.46 00:16:55.414 lat (msec): min=4, max=142, avg=14.54, stdev=15.47 00:16:55.414 clat percentiles (msec): 00:16:55.414 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8], 00:16:55.414 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 12], 00:16:55.414 | 70.00th=[ 14], 80.00th=[ 19], 90.00th=[ 24], 95.00th=[ 31], 00:16:55.414 | 99.00th=[ 120], 99.50th=[ 128], 99.90th=[ 142], 99.95th=[ 142], 00:16:55.414 | 99.99th=[ 142] 00:16:55.414 write: IOPS=73, BW=9411KiB/s (9637kB/s)(81.4MiB/8854msec); 0 zone resets 00:16:55.414 slat (usec): min=37, max=6920, avg=154.89, stdev=333.66 00:16:55.414 clat (msec): min=57, max=319, avg=108.03, stdev=44.21 00:16:55.414 lat (msec): min=57, max=319, avg=108.18, stdev=44.24 00:16:55.414 clat percentiles (msec): 00:16:55.414 | 1.00th=[ 59], 5.00th=[ 62], 10.00th=[ 64], 20.00th=[ 69], 00:16:55.414 | 30.00th=[ 78], 40.00th=[ 86], 50.00th=[ 97], 60.00th=[ 110], 00:16:55.414 | 70.00th=[ 123], 80.00th=[ 142], 90.00th=[ 174], 95.00th=[ 194], 00:16:55.414 | 99.00th=[ 245], 99.50th=[ 288], 99.90th=[ 321], 99.95th=[ 321], 00:16:55.414 | 99.99th=[ 321] 00:16:55.414 bw ( KiB/s): min= 1792, max=13824, per=0.79%, avg=8485.84, stdev=3498.66, samples=19 00:16:55.414 iops : min= 14, max= 108, avg=66.16, stdev=27.36, samples=19 00:16:55.414 lat (msec) : 10=21.84%, 20=20.37%, 50=6.12%, 100=27.03%, 250=24.24% 00:16:55.414 lat (msec) : 500=0.39% 00:16:55.414 cpu : usr=0.42%, sys=0.33%, ctx=2101, majf=0, minf=11 00:16:55.414 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:16:55.414 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.414 issued rwts: total=640,651,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.414 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.414 job33: (groupid=0, jobs=1): err= 0: pid=75059: Wed Jul 24 05:06:09 2024 00:16:55.414 read: IOPS=75, BW=9660KiB/s (9892kB/s)(80.0MiB/8480msec) 00:16:55.414 slat (usec): min=7, max=3533, avg=68.14, stdev=190.00 00:16:55.414 clat (usec): min=6394, max=83717, avg=19641.91, stdev=12776.61 00:16:55.414 lat (usec): min=6824, max=83737, avg=19710.05, stdev=12775.89 00:16:55.414 clat percentiles (usec): 00:16:55.414 | 1.00th=[ 7504], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[11863], 00:16:55.414 | 30.00th=[13042], 40.00th=[13960], 50.00th=[15926], 60.00th=[18482], 00:16:55.414 | 70.00th=[21365], 80.00th=[23725], 90.00th=[29754], 95.00th=[46924], 00:16:55.414 | 99.00th=[80217], 99.50th=[81265], 99.90th=[83362], 99.95th=[83362], 00:16:55.414 | 99.99th=[83362] 00:16:55.414 write: IOPS=81, BW=10.2MiB/s (10.7MB/s)(86.1MiB/8452msec); 0 zone resets 00:16:55.414 slat (usec): min=34, max=3102, avg=132.63, stdev=226.80 00:16:55.414 clat (msec): min=25, max=479, avg=97.22, stdev=52.98 00:16:55.414 lat (msec): min=25, max=479, avg=97.35, stdev=52.97 00:16:55.414 clat percentiles (msec): 00:16:55.414 | 1.00th=[ 30], 5.00th=[ 62], 10.00th=[ 64], 20.00th=[ 67], 00:16:55.414 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 86], 00:16:55.414 | 70.00th=[ 100], 80.00th=[ 123], 90.00th=[ 150], 95.00th=[ 188], 00:16:55.414 | 99.00th=[ 359], 99.50th=[ 426], 99.90th=[ 481], 99.95th=[ 481], 00:16:55.414 | 99.99th=[ 481] 00:16:55.414 bw ( KiB/s): min= 256, max=14848, per=0.82%, avg=8727.00, stdev=4752.93, samples=20 00:16:55.414 iops : min= 2, max= 116, avg=68.00, stdev=37.09, samples=20 00:16:55.414 lat (msec) : 10=6.17%, 20=25.21%, 50=15.35%, 100=38.45%, 250=13.77% 00:16:55.414 lat (msec) : 500=1.05% 00:16:55.414 cpu : usr=0.54%, sys=0.24%, 
ctx=2101, majf=0, minf=3 00:16:55.414 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.414 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.414 issued rwts: total=640,689,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.414 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.414 job34: (groupid=0, jobs=1): err= 0: pid=75060: Wed Jul 24 05:06:09 2024 00:16:55.414 read: IOPS=60, BW=7694KiB/s (7879kB/s)(60.0MiB/7985msec) 00:16:55.414 slat (usec): min=8, max=8745, avg=84.12, stdev=514.93 00:16:55.414 clat (usec): min=4886, max=99111, avg=16352.25, stdev=15878.02 00:16:55.414 lat (usec): min=4902, max=99120, avg=16436.37, stdev=15895.11 00:16:55.414 clat percentiles (usec): 00:16:55.414 | 1.00th=[ 5080], 5.00th=[ 5473], 10.00th=[ 5866], 20.00th=[ 7046], 00:16:55.414 | 30.00th=[ 8586], 40.00th=[10028], 50.00th=[12256], 60.00th=[14091], 00:16:55.414 | 70.00th=[16450], 80.00th=[19530], 90.00th=[26870], 95.00th=[42730], 00:16:55.414 | 99.00th=[94897], 99.50th=[98042], 99.90th=[99091], 99.95th=[99091], 00:16:55.414 | 99.99th=[99091] 00:16:55.414 write: IOPS=69, BW=8924KiB/s (9138kB/s)(79.0MiB/9065msec); 0 zone resets 00:16:55.414 slat (usec): min=38, max=24037, avg=167.85, stdev=966.29 00:16:55.414 clat (msec): min=54, max=438, avg=113.66, stdev=50.95 00:16:55.414 lat (msec): min=55, max=438, avg=113.83, stdev=50.92 00:16:55.414 clat percentiles (msec): 00:16:55.414 | 1.00th=[ 59], 5.00th=[ 65], 10.00th=[ 68], 20.00th=[ 75], 00:16:55.414 | 30.00th=[ 83], 40.00th=[ 92], 50.00th=[ 105], 60.00th=[ 117], 00:16:55.414 | 70.00th=[ 126], 80.00th=[ 140], 90.00th=[ 165], 95.00th=[ 182], 00:16:55.414 | 99.00th=[ 363], 99.50th=[ 380], 99.90th=[ 439], 99.95th=[ 439], 00:16:55.414 | 99.99th=[ 439] 00:16:55.414 bw ( KiB/s): min= 512, max=12544, per=0.75%, avg=7993.45, stdev=3551.02, samples=20 00:16:55.414 iops : min= 4, max= 98, 
avg=62.35, stdev=27.69, samples=20 00:16:55.414 lat (msec) : 10=17.36%, 20=17.63%, 50=6.56%, 100=27.88%, 250=29.32% 00:16:55.414 lat (msec) : 500=1.26% 00:16:55.414 cpu : usr=0.48%, sys=0.18%, ctx=1852, majf=0, minf=5 00:16:55.414 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.414 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.414 issued rwts: total=480,632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.414 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.414 job35: (groupid=0, jobs=1): err= 0: pid=75061: Wed Jul 24 05:06:09 2024 00:16:55.414 read: IOPS=77, BW=9949KiB/s (10.2MB/s)(80.0MiB/8234msec) 00:16:55.414 slat (usec): min=8, max=2147, avg=59.86, stdev=137.76 00:16:55.414 clat (msec): min=8, max=193, avg=20.57, stdev=18.38 00:16:55.414 lat (msec): min=8, max=193, avg=20.63, stdev=18.38 00:16:55.414 clat percentiles (msec): 00:16:55.414 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 13], 00:16:55.414 | 30.00th=[ 15], 40.00th=[ 17], 50.00th=[ 19], 60.00th=[ 20], 00:16:55.414 | 70.00th=[ 21], 80.00th=[ 23], 90.00th=[ 29], 95.00th=[ 38], 00:16:55.414 | 99.00th=[ 169], 99.50th=[ 182], 99.90th=[ 194], 99.95th=[ 194], 00:16:55.414 | 99.99th=[ 194] 00:16:55.414 write: IOPS=86, BW=10.8MiB/s (11.4MB/s)(90.8MiB/8372msec); 0 zone resets 00:16:55.414 slat (usec): min=50, max=16410, avg=152.20, stdev=636.38 00:16:55.414 clat (msec): min=35, max=318, avg=91.21, stdev=37.18 00:16:55.414 lat (msec): min=36, max=318, avg=91.36, stdev=37.17 00:16:55.414 clat percentiles (msec): 00:16:55.414 | 1.00th=[ 44], 5.00th=[ 62], 10.00th=[ 65], 20.00th=[ 68], 00:16:55.414 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 79], 60.00th=[ 84], 00:16:55.414 | 70.00th=[ 94], 80.00th=[ 111], 90.00th=[ 134], 95.00th=[ 163], 00:16:55.414 | 99.00th=[ 234], 99.50th=[ 268], 99.90th=[ 317], 99.95th=[ 317], 00:16:55.414 | 99.99th=[ 317] 
00:16:55.414 bw ( KiB/s): min= 768, max=14336, per=0.86%, avg=9186.65, stdev=4781.91, samples=20 00:16:55.414 iops : min= 6, max= 112, avg=71.65, stdev=37.38, samples=20 00:16:55.414 lat (msec) : 10=2.42%, 20=27.67%, 50=16.62%, 100=39.68%, 250=13.25% 00:16:55.414 lat (msec) : 500=0.37% 00:16:55.414 cpu : usr=0.62%, sys=0.21%, ctx=2170, majf=0, minf=3 00:16:55.414 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.414 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.414 issued rwts: total=640,726,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.414 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.414 job36: (groupid=0, jobs=1): err= 0: pid=75062: Wed Jul 24 05:06:09 2024 00:16:55.414 read: IOPS=76, BW=9757KiB/s (9991kB/s)(80.0MiB/8396msec) 00:16:55.414 slat (usec): min=7, max=874, avg=51.35, stdev=93.96 00:16:55.414 clat (msec): min=7, max=100, avg=18.42, stdev=11.75 00:16:55.415 lat (msec): min=7, max=100, avg=18.47, stdev=11.75 00:16:55.415 clat percentiles (msec): 00:16:55.415 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:16:55.415 | 30.00th=[ 12], 40.00th=[ 15], 50.00th=[ 17], 60.00th=[ 19], 00:16:55.415 | 70.00th=[ 20], 80.00th=[ 23], 90.00th=[ 30], 95.00th=[ 37], 00:16:55.415 | 99.00th=[ 90], 99.50th=[ 99], 99.90th=[ 102], 99.95th=[ 102], 00:16:55.415 | 99.99th=[ 102] 00:16:55.415 write: IOPS=86, BW=10.8MiB/s (11.3MB/s)(92.2MiB/8558msec); 0 zone resets 00:16:55.415 slat (usec): min=36, max=15486, avg=144.62, stdev=585.02 00:16:55.415 clat (msec): min=31, max=255, avg=91.82, stdev=38.49 00:16:55.415 lat (msec): min=32, max=255, avg=91.96, stdev=38.47 00:16:55.415 clat percentiles (msec): 00:16:55.415 | 1.00th=[ 45], 5.00th=[ 62], 10.00th=[ 64], 20.00th=[ 67], 00:16:55.415 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 82], 00:16:55.415 | 70.00th=[ 92], 80.00th=[ 113], 90.00th=[ 
157], 95.00th=[ 178], 00:16:55.415 | 99.00th=[ 230], 99.50th=[ 249], 99.90th=[ 255], 99.95th=[ 255], 00:16:55.415 | 99.99th=[ 255] 00:16:55.415 bw ( KiB/s): min= 1792, max=15584, per=0.88%, avg=9352.10, stdev=4547.92, samples=20 00:16:55.415 iops : min= 14, max= 121, avg=72.95, stdev=35.54, samples=20 00:16:55.415 lat (msec) : 10=6.10%, 20=26.85%, 50=13.21%, 100=40.42%, 250=13.28% 00:16:55.415 lat (msec) : 500=0.15% 00:16:55.415 cpu : usr=0.54%, sys=0.26%, ctx=2218, majf=0, minf=3 00:16:55.415 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.415 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.415 issued rwts: total=640,738,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.415 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.415 job37: (groupid=0, jobs=1): err= 0: pid=75064: Wed Jul 24 05:06:09 2024 00:16:55.415 read: IOPS=77, BW=9948KiB/s (10.2MB/s)(80.0MiB/8235msec) 00:16:55.415 slat (usec): min=7, max=853, avg=44.63, stdev=78.97 00:16:55.415 clat (msec): min=4, max=273, avg=19.12, stdev=27.67 00:16:55.415 lat (msec): min=5, max=273, avg=19.17, stdev=27.67 00:16:55.415 clat percentiles (msec): 00:16:55.415 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10], 00:16:55.415 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 14], 60.00th=[ 18], 00:16:55.415 | 70.00th=[ 20], 80.00th=[ 22], 90.00th=[ 28], 95.00th=[ 34], 00:16:55.415 | 99.00th=[ 247], 99.50th=[ 253], 99.90th=[ 275], 99.95th=[ 275], 00:16:55.415 | 99.99th=[ 275] 00:16:55.415 write: IOPS=85, BW=10.7MiB/s (11.2MB/s)(90.5MiB/8497msec); 0 zone resets 00:16:55.415 slat (usec): min=36, max=5923, avg=128.75, stdev=262.12 00:16:55.415 clat (msec): min=41, max=398, avg=92.96, stdev=43.27 00:16:55.415 lat (msec): min=41, max=399, avg=93.09, stdev=43.28 00:16:55.415 clat percentiles (msec): 00:16:55.415 | 1.00th=[ 52], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 
67], 00:16:55.415 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 83], 00:16:55.415 | 70.00th=[ 97], 80.00th=[ 116], 90.00th=[ 133], 95.00th=[ 169], 00:16:55.415 | 99.00th=[ 279], 99.50th=[ 334], 99.90th=[ 401], 99.95th=[ 401], 00:16:55.415 | 99.99th=[ 401] 00:16:55.415 bw ( KiB/s): min= 768, max=15360, per=0.90%, avg=9648.89, stdev=4257.54, samples=19 00:16:55.415 iops : min= 6, max= 120, avg=75.16, stdev=33.17, samples=19 00:16:55.415 lat (msec) : 10=13.42%, 20=20.45%, 50=12.39%, 100=38.27%, 250=14.37% 00:16:55.415 lat (msec) : 500=1.10% 00:16:55.415 cpu : usr=0.51%, sys=0.29%, ctx=2099, majf=0, minf=3 00:16:55.415 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.415 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.415 issued rwts: total=640,724,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.415 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.415 job38: (groupid=0, jobs=1): err= 0: pid=75068: Wed Jul 24 05:06:09 2024 00:16:55.415 read: IOPS=73, BW=9451KiB/s (9678kB/s)(80.0MiB/8668msec) 00:16:55.415 slat (usec): min=8, max=1567, avg=45.11, stdev=95.09 00:16:55.415 clat (usec): min=7139, max=52573, avg=15505.68, stdev=6946.19 00:16:55.415 lat (usec): min=7176, max=52671, avg=15550.79, stdev=6950.95 00:16:55.415 clat percentiles (usec): 00:16:55.415 | 1.00th=[ 8356], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10552], 00:16:55.415 | 30.00th=[11207], 40.00th=[12387], 50.00th=[13566], 60.00th=[15008], 00:16:55.415 | 70.00th=[16909], 80.00th=[19006], 90.00th=[21627], 95.00th=[30278], 00:16:55.415 | 99.00th=[43779], 99.50th=[49021], 99.90th=[52691], 99.95th=[52691], 00:16:55.415 | 99.99th=[52691] 00:16:55.415 write: IOPS=82, BW=10.4MiB/s (10.9MB/s)(91.2MiB/8808msec); 0 zone resets 00:16:55.415 slat (usec): min=28, max=3879, avg=129.35, stdev=218.22 00:16:55.415 clat (msec): min=17, max=330, 
avg=95.69, stdev=45.88 00:16:55.415 lat (msec): min=17, max=330, avg=95.81, stdev=45.88 00:16:55.415 clat percentiles (msec): 00:16:55.415 | 1.00th=[ 25], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 67], 00:16:55.415 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 82], 60.00th=[ 90], 00:16:55.415 | 70.00th=[ 101], 80.00th=[ 118], 90.00th=[ 148], 95.00th=[ 190], 00:16:55.415 | 99.00th=[ 288], 99.50th=[ 317], 99.90th=[ 330], 99.95th=[ 330], 00:16:55.415 | 99.99th=[ 330] 00:16:55.415 bw ( KiB/s): min= 1280, max=15360, per=0.86%, avg=9235.60, stdev=4406.49, samples=20 00:16:55.415 iops : min= 10, max= 120, avg=71.90, stdev=34.46, samples=20 00:16:55.415 lat (msec) : 10=6.64%, 20=33.43%, 50=7.66%, 100=35.99%, 250=15.47% 00:16:55.415 lat (msec) : 500=0.80% 00:16:55.415 cpu : usr=0.57%, sys=0.26%, ctx=2114, majf=0, minf=3 00:16:55.415 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.415 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.415 issued rwts: total=640,730,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.415 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.415 job39: (groupid=0, jobs=1): err= 0: pid=75070: Wed Jul 24 05:06:09 2024 00:16:55.415 read: IOPS=72, BW=9320KiB/s (9543kB/s)(80.0MiB/8790msec) 00:16:55.415 slat (usec): min=7, max=1041, avg=51.91, stdev=104.67 00:16:55.415 clat (msec): min=7, max=130, avg=17.52, stdev=12.86 00:16:55.415 lat (msec): min=7, max=130, avg=17.57, stdev=12.86 00:16:55.415 clat percentiles (msec): 00:16:55.415 | 1.00th=[ 10], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:16:55.415 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 17], 00:16:55.415 | 70.00th=[ 19], 80.00th=[ 21], 90.00th=[ 22], 95.00th=[ 32], 00:16:55.415 | 99.00th=[ 82], 99.50th=[ 105], 99.90th=[ 131], 99.95th=[ 131], 00:16:55.415 | 99.99th=[ 131] 00:16:55.415 write: IOPS=85, BW=10.7MiB/s 
(11.2MB/s)(92.8MiB/8646msec); 0 zone resets 00:16:55.415 slat (usec): min=31, max=7909, avg=146.52, stdev=417.20 00:16:55.415 clat (usec): min=977, max=271236, avg=92301.90, stdev=45503.74 00:16:55.415 lat (usec): min=1048, max=271295, avg=92448.42, stdev=45503.12 00:16:55.415 clat percentiles (msec): 00:16:55.415 | 1.00th=[ 3], 5.00th=[ 24], 10.00th=[ 62], 20.00th=[ 69], 00:16:55.415 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 84], 60.00th=[ 90], 00:16:55.415 | 70.00th=[ 97], 80.00th=[ 110], 90.00th=[ 146], 95.00th=[ 194], 00:16:55.415 | 99.00th=[ 251], 99.50th=[ 257], 99.90th=[ 271], 99.95th=[ 271], 00:16:55.415 | 99.99th=[ 271] 00:16:55.415 bw ( KiB/s): min= 2043, max=23296, per=0.88%, avg=9369.90, stdev=5221.95, samples=20 00:16:55.415 iops : min= 15, max= 182, avg=72.75, stdev=40.83, samples=20 00:16:55.415 lat (usec) : 1000=0.07% 00:16:55.415 lat (msec) : 2=0.22%, 4=1.23%, 10=3.69%, 20=34.15%, 50=9.41% 00:16:55.415 lat (msec) : 100=36.47%, 250=13.97%, 500=0.80% 00:16:55.415 cpu : usr=0.49%, sys=0.33%, ctx=2129, majf=0, minf=5 00:16:55.415 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.415 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.415 issued rwts: total=640,742,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.415 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.415 job40: (groupid=0, jobs=1): err= 0: pid=75071: Wed Jul 24 05:06:09 2024 00:16:55.415 read: IOPS=76, BW=9844KiB/s (10.1MB/s)(80.0MiB/8322msec) 00:16:55.415 slat (usec): min=7, max=852, avg=61.26, stdev=104.61 00:16:55.415 clat (usec): min=4106, max=50339, avg=14191.31, stdev=7325.28 00:16:55.415 lat (usec): min=4237, max=50360, avg=14252.57, stdev=7331.14 00:16:55.415 clat percentiles (usec): 00:16:55.415 | 1.00th=[ 5538], 5.00th=[ 6194], 10.00th=[ 7177], 20.00th=[ 8291], 00:16:55.415 | 30.00th=[10028], 40.00th=[10945], 
50.00th=[11994], 60.00th=[13960], 00:16:55.415 | 70.00th=[15270], 80.00th=[17957], 90.00th=[25297], 95.00th=[28705], 00:16:55.415 | 99.00th=[38536], 99.50th=[39584], 99.90th=[50594], 99.95th=[50594], 00:16:55.415 | 99.99th=[50594] 00:16:55.415 write: IOPS=74, BW=9539KiB/s (9768kB/s)(83.0MiB/8910msec); 0 zone resets 00:16:55.415 slat (usec): min=42, max=32041, avg=195.00, stdev=1286.28 00:16:55.415 clat (msec): min=32, max=562, avg=106.07, stdev=68.01 00:16:55.415 lat (msec): min=40, max=562, avg=106.26, stdev=67.97 00:16:55.415 clat percentiles (msec): 00:16:55.415 | 1.00th=[ 59], 5.00th=[ 63], 10.00th=[ 64], 20.00th=[ 68], 00:16:55.415 | 30.00th=[ 72], 40.00th=[ 79], 50.00th=[ 88], 60.00th=[ 96], 00:16:55.415 | 70.00th=[ 105], 80.00th=[ 123], 90.00th=[ 157], 95.00th=[ 215], 00:16:55.415 | 99.00th=[ 489], 99.50th=[ 510], 99.90th=[ 558], 99.95th=[ 558], 00:16:55.415 | 99.99th=[ 558] 00:16:55.415 bw ( KiB/s): min= 1536, max=14336, per=0.79%, avg=8398.60, stdev=4423.60, samples=20 00:16:55.415 iops : min= 12, max= 112, avg=65.45, stdev=34.65, samples=20 00:16:55.415 lat (msec) : 10=14.57%, 20=26.38%, 50=8.13%, 100=33.28%, 250=15.72% 00:16:55.415 lat (msec) : 500=1.53%, 750=0.38% 00:16:55.415 cpu : usr=0.60%, sys=0.17%, ctx=2111, majf=0, minf=3 00:16:55.415 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.415 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.415 issued rwts: total=640,664,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.415 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.415 job41: (groupid=0, jobs=1): err= 0: pid=75072: Wed Jul 24 05:06:09 2024 00:16:55.416 read: IOPS=75, BW=9638KiB/s (9869kB/s)(80.0MiB/8500msec) 00:16:55.416 slat (usec): min=7, max=905, avg=46.85, stdev=87.05 00:16:55.416 clat (msec): min=6, max=105, avg=14.98, stdev=10.49 00:16:55.416 lat (msec): min=6, max=105, avg=15.03, 
stdev=10.49 00:16:55.416 clat percentiles (msec): 00:16:55.416 | 1.00th=[ 8], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 9], 00:16:55.416 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 15], 00:16:55.416 | 70.00th=[ 16], 80.00th=[ 17], 90.00th=[ 22], 95.00th=[ 27], 00:16:55.416 | 99.00th=[ 82], 99.50th=[ 102], 99.90th=[ 107], 99.95th=[ 107], 00:16:55.416 | 99.99th=[ 107] 00:16:55.416 write: IOPS=85, BW=10.7MiB/s (11.2MB/s)(94.9MiB/8864msec); 0 zone resets 00:16:55.416 slat (usec): min=38, max=7623, avg=151.66, stdev=333.87 00:16:55.416 clat (msec): min=13, max=353, avg=92.58, stdev=37.53 00:16:55.416 lat (msec): min=13, max=353, avg=92.74, stdev=37.51 00:16:55.416 clat percentiles (msec): 00:16:55.416 | 1.00th=[ 20], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 68], 00:16:55.416 | 30.00th=[ 73], 40.00th=[ 80], 50.00th=[ 84], 60.00th=[ 90], 00:16:55.416 | 70.00th=[ 97], 80.00th=[ 111], 90.00th=[ 138], 95.00th=[ 174], 00:16:55.416 | 99.00th=[ 224], 99.50th=[ 236], 99.90th=[ 355], 99.95th=[ 355], 00:16:55.416 | 99.99th=[ 355] 00:16:55.416 bw ( KiB/s): min= 1792, max=17699, per=0.90%, avg=9599.05, stdev=4335.98, samples=20 00:16:55.416 iops : min= 14, max= 138, avg=74.75, stdev=33.86, samples=20 00:16:55.416 lat (msec) : 10=12.22%, 20=28.59%, 50=6.00%, 100=38.38%, 250=14.58% 00:16:55.416 lat (msec) : 500=0.21% 00:16:55.416 cpu : usr=0.64%, sys=0.20%, ctx=2251, majf=0, minf=1 00:16:55.416 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.416 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.416 issued rwts: total=640,759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.416 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.416 job42: (groupid=0, jobs=1): err= 0: pid=75073: Wed Jul 24 05:06:09 2024 00:16:55.416 read: IOPS=67, BW=8579KiB/s (8785kB/s)(63.8MiB/7609msec) 00:16:55.416 slat (usec): min=7, max=1676, 
avg=54.79, stdev=122.99 00:16:55.416 clat (usec): min=3498, max=80686, avg=14563.80, stdev=13162.32 00:16:55.416 lat (usec): min=4054, max=80984, avg=14618.59, stdev=13167.65 00:16:55.416 clat percentiles (usec): 00:16:55.416 | 1.00th=[ 4752], 5.00th=[ 5276], 10.00th=[ 5538], 20.00th=[ 8029], 00:16:55.416 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10421], 60.00th=[12125], 00:16:55.416 | 70.00th=[14091], 80.00th=[16712], 90.00th=[23462], 95.00th=[40109], 00:16:55.416 | 99.00th=[74974], 99.50th=[79168], 99.90th=[80217], 99.95th=[80217], 00:16:55.416 | 99.99th=[80217] 00:16:55.416 write: IOPS=70, BW=9041KiB/s (9258kB/s)(80.0MiB/9061msec); 0 zone resets 00:16:55.416 slat (usec): min=50, max=1304, avg=131.80, stdev=142.97 00:16:55.416 clat (msec): min=56, max=309, avg=112.50, stdev=37.76 00:16:55.416 lat (msec): min=56, max=309, avg=112.63, stdev=37.75 00:16:55.416 clat percentiles (msec): 00:16:55.416 | 1.00th=[ 60], 5.00th=[ 66], 10.00th=[ 71], 20.00th=[ 81], 00:16:55.416 | 30.00th=[ 90], 40.00th=[ 97], 50.00th=[ 104], 60.00th=[ 113], 00:16:55.416 | 70.00th=[ 128], 80.00th=[ 140], 90.00th=[ 163], 95.00th=[ 184], 00:16:55.416 | 99.00th=[ 247], 99.50th=[ 253], 99.90th=[ 309], 99.95th=[ 309], 00:16:55.416 | 99.99th=[ 309] 00:16:55.416 bw ( KiB/s): min= 2048, max=11497, per=0.75%, avg=8013.53, stdev=2604.82, samples=19 00:16:55.416 iops : min= 16, max= 89, avg=62.47, stdev=20.23, samples=19 00:16:55.416 lat (msec) : 4=0.17%, 10=19.65%, 20=18.35%, 50=4.78%, 100=26.52% 00:16:55.416 lat (msec) : 250=30.17%, 500=0.35% 00:16:55.416 cpu : usr=0.49%, sys=0.20%, ctx=1855, majf=0, minf=5 00:16:55.416 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.416 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.416 issued rwts: total=510,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.416 latency : target=0, window=0, percentile=100.00%, 
depth=8 00:16:55.416 job43: (groupid=0, jobs=1): err= 0: pid=75074: Wed Jul 24 05:06:09 2024 00:16:55.416 read: IOPS=74, BW=9489KiB/s (9717kB/s)(80.0MiB/8633msec) 00:16:55.416 slat (usec): min=7, max=1274, avg=57.67, stdev=126.56 00:16:55.416 clat (msec): min=4, max=129, avg=17.21, stdev=14.02 00:16:55.416 lat (msec): min=4, max=129, avg=17.27, stdev=14.02 00:16:55.416 clat percentiles (msec): 00:16:55.416 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 11], 00:16:55.416 | 30.00th=[ 11], 40.00th=[ 13], 50.00th=[ 15], 60.00th=[ 17], 00:16:55.416 | 70.00th=[ 20], 80.00th=[ 22], 90.00th=[ 26], 95.00th=[ 30], 00:16:55.416 | 99.00th=[ 123], 99.50th=[ 129], 99.90th=[ 130], 99.95th=[ 130], 00:16:55.416 | 99.99th=[ 130] 00:16:55.416 write: IOPS=78, BW=9.85MiB/s (10.3MB/s)(85.2MiB/8651msec); 0 zone resets 00:16:55.416 slat (usec): min=40, max=1934, avg=129.71, stdev=156.65 00:16:55.416 clat (msec): min=48, max=377, avg=100.29, stdev=48.99 00:16:55.416 lat (msec): min=48, max=378, avg=100.42, stdev=48.99 00:16:55.416 clat percentiles (msec): 00:16:55.416 | 1.00th=[ 52], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 67], 00:16:55.416 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 89], 60.00th=[ 97], 00:16:55.416 | 70.00th=[ 105], 80.00th=[ 118], 90.00th=[ 150], 95.00th=[ 186], 00:16:55.416 | 99.00th=[ 347], 99.50th=[ 363], 99.90th=[ 380], 99.95th=[ 380], 00:16:55.416 | 99.99th=[ 380] 00:16:55.416 bw ( KiB/s): min= 1280, max=15360, per=0.85%, avg=9086.00, stdev=4040.58, samples=19 00:16:55.416 iops : min= 10, max= 120, avg=70.79, stdev=31.57, samples=19 00:16:55.416 lat (msec) : 10=9.38%, 20=27.00%, 50=11.65%, 100=33.36%, 250=17.25% 00:16:55.416 lat (msec) : 500=1.36% 00:16:55.416 cpu : usr=0.48%, sys=0.30%, ctx=2063, majf=0, minf=1 00:16:55.416 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.416 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:16:55.416 issued rwts: total=640,682,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.416 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.416 job44: (groupid=0, jobs=1): err= 0: pid=75075: Wed Jul 24 05:06:09 2024 00:16:55.416 read: IOPS=74, BW=9530KiB/s (9759kB/s)(80.0MiB/8596msec) 00:16:55.416 slat (usec): min=6, max=1955, avg=66.80, stdev=146.87 00:16:55.416 clat (usec): min=5992, max=59554, avg=15098.68, stdev=6775.42 00:16:55.416 lat (usec): min=6327, max=59572, avg=15165.48, stdev=6780.17 00:16:55.416 clat percentiles (usec): 00:16:55.416 | 1.00th=[ 6783], 5.00th=[ 8029], 10.00th=[ 8717], 20.00th=[ 9896], 00:16:55.416 | 30.00th=[11994], 40.00th=[13173], 50.00th=[14091], 60.00th=[15008], 00:16:55.416 | 70.00th=[15926], 80.00th=[17171], 90.00th=[22414], 95.00th=[26346], 00:16:55.416 | 99.00th=[45876], 99.50th=[55837], 99.90th=[59507], 99.95th=[59507], 00:16:55.416 | 99.99th=[59507] 00:16:55.416 write: IOPS=85, BW=10.6MiB/s (11.2MB/s)(94.2MiB/8854msec); 0 zone resets 00:16:55.416 slat (usec): min=39, max=2912, avg=126.72, stdev=202.65 00:16:55.416 clat (msec): min=9, max=352, avg=93.09, stdev=44.19 00:16:55.416 lat (msec): min=9, max=352, avg=93.22, stdev=44.19 00:16:55.416 clat percentiles (msec): 00:16:55.416 | 1.00th=[ 14], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 67], 00:16:55.416 | 30.00th=[ 70], 40.00th=[ 75], 50.00th=[ 80], 60.00th=[ 87], 00:16:55.416 | 70.00th=[ 99], 80.00th=[ 115], 90.00th=[ 140], 95.00th=[ 184], 00:16:55.416 | 99.00th=[ 275], 99.50th=[ 288], 99.90th=[ 355], 99.95th=[ 355], 00:16:55.416 | 99.99th=[ 355] 00:16:55.416 bw ( KiB/s): min= 1792, max=17664, per=0.94%, avg=10036.84, stdev=4164.29, samples=19 00:16:55.416 iops : min= 14, max= 138, avg=78.16, stdev=32.56, samples=19 00:16:55.416 lat (msec) : 10=9.76%, 20=29.63%, 50=7.89%, 100=37.45%, 250=14.56% 00:16:55.416 lat (msec) : 500=0.72% 00:16:55.416 cpu : usr=0.61%, sys=0.22%, ctx=2205, majf=0, minf=1 00:16:55.416 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:16:55.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.416 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.416 issued rwts: total=640,754,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.416 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.416 job45: (groupid=0, jobs=1): err= 0: pid=75076: Wed Jul 24 05:06:09 2024 00:16:55.416 read: IOPS=80, BW=10.0MiB/s (10.5MB/s)(80.0MiB/7973msec) 00:16:55.416 slat (usec): min=7, max=1020, avg=47.37, stdev=88.69 00:16:55.416 clat (msec): min=6, max=202, avg=18.11, stdev=19.41 00:16:55.416 lat (msec): min=6, max=202, avg=18.16, stdev=19.41 00:16:55.416 clat percentiles (msec): 00:16:55.416 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 11], 00:16:55.416 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 16], 00:16:55.416 | 70.00th=[ 18], 80.00th=[ 21], 90.00th=[ 26], 95.00th=[ 31], 00:16:55.416 | 99.00th=[ 109], 99.50th=[ 188], 99.90th=[ 203], 99.95th=[ 203], 00:16:55.416 | 99.99th=[ 203] 00:16:55.416 write: IOPS=83, BW=10.4MiB/s (10.9MB/s)(89.1MiB/8570msec); 0 zone resets 00:16:55.416 slat (usec): min=37, max=2554, avg=136.57, stdev=193.40 00:16:55.416 clat (msec): min=41, max=287, avg=95.28, stdev=31.51 00:16:55.417 lat (msec): min=41, max=287, avg=95.42, stdev=31.52 00:16:55.417 clat percentiles (msec): 00:16:55.417 | 1.00th=[ 59], 5.00th=[ 63], 10.00th=[ 65], 20.00th=[ 70], 00:16:55.417 | 30.00th=[ 75], 40.00th=[ 82], 50.00th=[ 89], 60.00th=[ 97], 00:16:55.417 | 70.00th=[ 104], 80.00th=[ 112], 90.00th=[ 136], 95.00th=[ 155], 00:16:55.417 | 99.00th=[ 215], 99.50th=[ 230], 99.90th=[ 288], 99.95th=[ 288], 00:16:55.417 | 99.99th=[ 288] 00:16:55.417 bw ( KiB/s): min= 2304, max=14592, per=0.83%, avg=8835.68, stdev=3852.78, samples=19 00:16:55.417 iops : min= 18, max= 114, avg=68.84, stdev=30.20, samples=19 00:16:55.417 lat (msec) : 10=7.69%, 20=28.82%, 50=9.90%, 100=33.92%, 250=19.51% 00:16:55.417 lat (msec) : 500=0.15% 
00:16:55.417 cpu : usr=0.52%, sys=0.28%, ctx=2251, majf=0, minf=5 00:16:55.417 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.417 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.417 issued rwts: total=640,713,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.417 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.417 job46: (groupid=0, jobs=1): err= 0: pid=75077: Wed Jul 24 05:06:09 2024 00:16:55.417 read: IOPS=74, BW=9495KiB/s (9723kB/s)(80.0MiB/8628msec) 00:16:55.417 slat (usec): min=7, max=1695, avg=61.73, stdev=127.38 00:16:55.417 clat (msec): min=4, max=149, avg=15.30, stdev=15.67 00:16:55.417 lat (msec): min=5, max=149, avg=15.36, stdev=15.67 00:16:55.417 clat percentiles (msec): 00:16:55.417 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 10], 00:16:55.417 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 15], 00:16:55.417 | 70.00th=[ 16], 80.00th=[ 18], 90.00th=[ 21], 95.00th=[ 27], 00:16:55.417 | 99.00th=[ 142], 99.50th=[ 150], 99.90th=[ 150], 99.95th=[ 150], 00:16:55.417 | 99.99th=[ 150] 00:16:55.417 write: IOPS=86, BW=10.9MiB/s (11.4MB/s)(95.8MiB/8821msec); 0 zone resets 00:16:55.417 slat (usec): min=46, max=2353, avg=130.18, stdev=165.33 00:16:55.417 clat (msec): min=20, max=273, avg=90.98, stdev=37.49 00:16:55.417 lat (msec): min=20, max=273, avg=91.11, stdev=37.50 00:16:55.417 clat percentiles (msec): 00:16:55.417 | 1.00th=[ 28], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 66], 00:16:55.417 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 79], 60.00th=[ 86], 00:16:55.417 | 70.00th=[ 94], 80.00th=[ 109], 90.00th=[ 140], 95.00th=[ 178], 00:16:55.417 | 99.00th=[ 222], 99.50th=[ 268], 99.90th=[ 275], 99.95th=[ 275], 00:16:55.417 | 99.99th=[ 275] 00:16:55.417 bw ( KiB/s): min= 1024, max=15872, per=0.91%, avg=9709.85, stdev=4499.10, samples=20 00:16:55.417 iops : min= 8, max= 124, avg=75.80, 
stdev=35.18, samples=20 00:16:55.417 lat (msec) : 10=12.09%, 20=28.59%, 50=4.98%, 100=39.54%, 250=14.51% 00:16:55.417 lat (msec) : 500=0.28% 00:16:55.417 cpu : usr=0.56%, sys=0.27%, ctx=2293, majf=0, minf=3 00:16:55.417 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.417 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.417 issued rwts: total=640,766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.417 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.417 job47: (groupid=0, jobs=1): err= 0: pid=75078: Wed Jul 24 05:06:09 2024 00:16:55.417 read: IOPS=74, BW=9568KiB/s (9797kB/s)(72.4MiB/7746msec) 00:16:55.417 slat (usec): min=7, max=798, avg=46.80, stdev=92.83 00:16:55.417 clat (msec): min=4, max=139, avg=17.40, stdev=22.15 00:16:55.417 lat (msec): min=4, max=139, avg=17.45, stdev=22.16 00:16:55.417 clat percentiles (msec): 00:16:55.417 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 9], 00:16:55.417 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 14], 00:16:55.417 | 70.00th=[ 16], 80.00th=[ 20], 90.00th=[ 26], 95.00th=[ 47], 00:16:55.417 | 99.00th=[ 134], 99.50th=[ 138], 99.90th=[ 140], 99.95th=[ 140], 00:16:55.417 | 99.99th=[ 140] 00:16:55.417 write: IOPS=73, BW=9384KiB/s (9609kB/s)(80.0MiB/8730msec); 0 zone resets 00:16:55.417 slat (usec): min=43, max=5515, avg=150.47, stdev=302.39 00:16:55.417 clat (msec): min=25, max=342, avg=108.28, stdev=42.42 00:16:55.417 lat (msec): min=25, max=343, avg=108.43, stdev=42.42 00:16:55.417 clat percentiles (msec): 00:16:55.417 | 1.00th=[ 39], 5.00th=[ 63], 10.00th=[ 67], 20.00th=[ 73], 00:16:55.417 | 30.00th=[ 84], 40.00th=[ 93], 50.00th=[ 102], 60.00th=[ 111], 00:16:55.417 | 70.00th=[ 122], 80.00th=[ 134], 90.00th=[ 155], 95.00th=[ 180], 00:16:55.417 | 99.00th=[ 284], 99.50th=[ 309], 99.90th=[ 342], 99.95th=[ 342], 00:16:55.417 | 99.99th=[ 342] 
00:16:55.417 bw ( KiB/s): min= 1024, max=14364, per=0.77%, avg=8218.68, stdev=3592.68, samples=19 00:16:55.417 iops : min= 8, max= 112, avg=64.11, stdev=28.05, samples=19 00:16:55.417 lat (msec) : 10=18.46%, 20=20.84%, 50=6.73%, 100=25.27%, 250=27.89% 00:16:55.417 lat (msec) : 500=0.82% 00:16:55.417 cpu : usr=0.47%, sys=0.25%, ctx=1955, majf=0, minf=9 00:16:55.417 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.417 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.417 issued rwts: total=579,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.417 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.417 job48: (groupid=0, jobs=1): err= 0: pid=75079: Wed Jul 24 05:06:09 2024 00:16:55.417 read: IOPS=76, BW=9748KiB/s (9982kB/s)(80.0MiB/8404msec) 00:16:55.417 slat (usec): min=7, max=1016, avg=54.37, stdev=93.30 00:16:55.417 clat (usec): min=6213, max=58062, avg=14658.80, stdev=7734.82 00:16:55.417 lat (usec): min=6368, max=58082, avg=14713.17, stdev=7727.74 00:16:55.417 clat percentiles (usec): 00:16:55.417 | 1.00th=[ 6652], 5.00th=[ 7111], 10.00th=[ 7767], 20.00th=[ 8586], 00:16:55.417 | 30.00th=[10421], 40.00th=[11469], 50.00th=[12911], 60.00th=[14091], 00:16:55.417 | 70.00th=[15533], 80.00th=[17957], 90.00th=[23725], 95.00th=[30278], 00:16:55.417 | 99.00th=[45351], 99.50th=[49021], 99.90th=[57934], 99.95th=[57934], 00:16:55.417 | 99.99th=[57934] 00:16:55.417 write: IOPS=85, BW=10.6MiB/s (11.2MB/s)(94.5MiB/8876msec); 0 zone resets 00:16:55.417 slat (usec): min=39, max=20728, avg=159.84, stdev=776.77 00:16:55.417 clat (msec): min=40, max=335, avg=92.85, stdev=40.48 00:16:55.417 lat (msec): min=41, max=335, avg=93.01, stdev=40.45 00:16:55.417 clat percentiles (msec): 00:16:55.417 | 1.00th=[ 47], 5.00th=[ 62], 10.00th=[ 64], 20.00th=[ 67], 00:16:55.417 | 30.00th=[ 71], 40.00th=[ 77], 50.00th=[ 82], 60.00th=[ 87], 
00:16:55.417 | 70.00th=[ 93], 80.00th=[ 106], 90.00th=[ 136], 95.00th=[ 188], 00:16:55.417 | 99.00th=[ 255], 99.50th=[ 271], 99.90th=[ 338], 99.95th=[ 338], 00:16:55.417 | 99.99th=[ 338] 00:16:55.417 bw ( KiB/s): min= 2304, max=14592, per=0.90%, avg=9580.70, stdev=4366.00, samples=20 00:16:55.417 iops : min= 18, max= 114, avg=74.75, stdev=34.13, samples=20 00:16:55.417 lat (msec) : 10=12.82%, 20=25.36%, 50=8.02%, 100=40.76%, 250=12.25% 00:16:55.417 lat (msec) : 500=0.79% 00:16:55.417 cpu : usr=0.55%, sys=0.27%, ctx=2230, majf=0, minf=3 00:16:55.417 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.417 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.417 issued rwts: total=640,756,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.417 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.417 job49: (groupid=0, jobs=1): err= 0: pid=75080: Wed Jul 24 05:06:09 2024 00:16:55.417 read: IOPS=73, BW=9466KiB/s (9693kB/s)(80.0MiB/8654msec) 00:16:55.417 slat (usec): min=7, max=1943, avg=53.87, stdev=120.79 00:16:55.417 clat (msec): min=5, max=133, avg=14.93, stdev=15.61 00:16:55.417 lat (msec): min=5, max=133, avg=14.98, stdev=15.60 00:16:55.417 clat percentiles (msec): 00:16:55.417 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:16:55.417 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 13], 00:16:55.417 | 70.00th=[ 15], 80.00th=[ 17], 90.00th=[ 22], 95.00th=[ 29], 00:16:55.417 | 99.00th=[ 117], 99.50th=[ 127], 99.90th=[ 134], 99.95th=[ 134], 00:16:55.417 | 99.99th=[ 134] 00:16:55.417 write: IOPS=89, BW=11.2MiB/s (11.7MB/s)(99.2MiB/8869msec); 0 zone resets 00:16:55.417 slat (usec): min=43, max=14975, avg=154.85, stdev=570.22 00:16:55.417 clat (msec): min=16, max=216, avg=88.61, stdev=31.61 00:16:55.417 lat (msec): min=16, max=216, avg=88.76, stdev=31.60 00:16:55.417 clat percentiles (msec): 00:16:55.417 
| 1.00th=[ 29], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 66], 00:16:55.417 | 30.00th=[ 70], 40.00th=[ 74], 50.00th=[ 80], 60.00th=[ 85], 00:16:55.417 | 70.00th=[ 94], 80.00th=[ 108], 90.00th=[ 131], 95.00th=[ 159], 00:16:55.417 | 99.00th=[ 201], 99.50th=[ 207], 99.90th=[ 218], 99.95th=[ 218], 00:16:55.417 | 99.99th=[ 218] 00:16:55.417 bw ( KiB/s): min= 2048, max=16160, per=0.94%, avg=10049.70, stdev=4124.18, samples=20 00:16:55.417 iops : min= 16, max= 126, avg=78.30, stdev=32.23, samples=20 00:16:55.417 lat (msec) : 10=16.04%, 20=23.64%, 50=5.09%, 100=40.86%, 250=14.37% 00:16:55.418 cpu : usr=0.59%, sys=0.28%, ctx=2311, majf=0, minf=1 00:16:55.418 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.418 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.418 issued rwts: total=640,794,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.418 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.418 job50: (groupid=0, jobs=1): err= 0: pid=75081: Wed Jul 24 05:06:09 2024 00:16:55.418 read: IOPS=72, BW=9250KiB/s (9472kB/s)(80.0MiB/8856msec) 00:16:55.418 slat (usec): min=7, max=1006, avg=48.90, stdev=87.76 00:16:55.418 clat (usec): min=4962, max=45874, avg=17007.95, stdev=7170.83 00:16:55.418 lat (usec): min=4994, max=45895, avg=17056.84, stdev=7171.06 00:16:55.418 clat percentiles (usec): 00:16:55.418 | 1.00th=[ 5407], 5.00th=[ 6849], 10.00th=[ 8455], 20.00th=[10814], 00:16:55.418 | 30.00th=[12780], 40.00th=[14353], 50.00th=[16450], 60.00th=[17957], 00:16:55.418 | 70.00th=[19530], 80.00th=[21365], 90.00th=[27395], 95.00th=[31065], 00:16:55.418 | 99.00th=[36439], 99.50th=[39584], 99.90th=[45876], 99.95th=[45876], 00:16:55.418 | 99.99th=[45876] 00:16:55.418 write: IOPS=84, BW=10.6MiB/s (11.1MB/s)(92.1MiB/8682msec); 0 zone resets 00:16:55.418 slat (usec): min=29, max=4406, avg=154.85, stdev=277.24 00:16:55.418 clat (msec): 
min=6, max=335, avg=93.38, stdev=46.72 00:16:55.418 lat (msec): min=6, max=335, avg=93.53, stdev=46.73 00:16:55.418 clat percentiles (msec): 00:16:55.418 | 1.00th=[ 10], 5.00th=[ 59], 10.00th=[ 62], 20.00th=[ 65], 00:16:55.418 | 30.00th=[ 69], 40.00th=[ 75], 50.00th=[ 81], 60.00th=[ 89], 00:16:55.418 | 70.00th=[ 97], 80.00th=[ 113], 90.00th=[ 161], 95.00th=[ 188], 00:16:55.418 | 99.00th=[ 271], 99.50th=[ 309], 99.90th=[ 334], 99.95th=[ 334], 00:16:55.418 | 99.99th=[ 334] 00:16:55.418 bw ( KiB/s): min= 1024, max=21504, per=0.87%, avg=9323.85, stdev=5089.77, samples=20 00:16:55.418 iops : min= 8, max= 168, avg=72.55, stdev=39.83, samples=20 00:16:55.418 lat (msec) : 10=8.06%, 20=27.74%, 50=13.22%, 100=36.09%, 250=14.02% 00:16:55.418 lat (msec) : 500=0.87% 00:16:55.418 cpu : usr=0.66%, sys=0.31%, ctx=2293, majf=0, minf=5 00:16:55.418 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.418 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.418 issued rwts: total=640,737,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.418 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.418 job51: (groupid=0, jobs=1): err= 0: pid=75082: Wed Jul 24 05:06:09 2024 00:16:55.418 read: IOPS=75, BW=9620KiB/s (9850kB/s)(80.0MiB/8516msec) 00:16:55.418 slat (usec): min=7, max=863, avg=55.25, stdev=113.01 00:16:55.418 clat (usec): min=6307, max=59151, avg=16998.64, stdev=8054.15 00:16:55.418 lat (usec): min=6818, max=59170, avg=17053.89, stdev=8048.29 00:16:55.418 clat percentiles (usec): 00:16:55.418 | 1.00th=[ 7177], 5.00th=[ 8225], 10.00th=[ 9110], 20.00th=[10683], 00:16:55.418 | 30.00th=[11863], 40.00th=[14353], 50.00th=[15795], 60.00th=[16909], 00:16:55.418 | 70.00th=[18220], 80.00th=[20841], 90.00th=[26346], 95.00th=[31589], 00:16:55.418 | 99.00th=[50594], 99.50th=[51119], 99.90th=[58983], 99.95th=[58983], 00:16:55.418 | 
99.99th=[58983] 00:16:55.418 write: IOPS=81, BW=10.2MiB/s (10.6MB/s)(88.1MiB/8681msec); 0 zone resets 00:16:55.418 slat (usec): min=50, max=59032, avg=248.60, stdev=2232.66 00:16:55.418 clat (msec): min=2, max=364, avg=97.30, stdev=45.25 00:16:55.418 lat (msec): min=2, max=365, avg=97.55, stdev=45.22 00:16:55.418 clat percentiles (msec): 00:16:55.418 | 1.00th=[ 50], 5.00th=[ 61], 10.00th=[ 64], 20.00th=[ 66], 00:16:55.418 | 30.00th=[ 71], 40.00th=[ 77], 50.00th=[ 82], 60.00th=[ 89], 00:16:55.418 | 70.00th=[ 103], 80.00th=[ 122], 90.00th=[ 159], 95.00th=[ 188], 00:16:55.418 | 99.00th=[ 275], 99.50th=[ 292], 99.90th=[ 368], 99.95th=[ 368], 00:16:55.418 | 99.99th=[ 368] 00:16:55.418 bw ( KiB/s): min= 1508, max=15616, per=0.83%, avg=8902.50, stdev=4333.75, samples=20 00:16:55.418 iops : min= 11, max= 122, avg=69.20, stdev=33.89, samples=20 00:16:55.418 lat (msec) : 4=0.15%, 10=7.36%, 20=29.89%, 50=10.33%, 100=35.17% 00:16:55.418 lat (msec) : 250=16.36%, 500=0.74% 00:16:55.418 cpu : usr=0.62%, sys=0.33%, ctx=2170, majf=0, minf=5 00:16:55.418 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.418 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.418 issued rwts: total=640,705,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.418 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.418 job52: (groupid=0, jobs=1): err= 0: pid=75083: Wed Jul 24 05:06:09 2024 00:16:55.418 read: IOPS=66, BW=8466KiB/s (8669kB/s)(60.0MiB/7257msec) 00:16:55.418 slat (usec): min=7, max=1174, avg=55.11, stdev=113.16 00:16:55.418 clat (msec): min=4, max=332, avg=25.78, stdev=39.41 00:16:55.418 lat (msec): min=4, max=332, avg=25.84, stdev=39.42 00:16:55.418 clat percentiles (msec): 00:16:55.418 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 11], 00:16:55.418 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 17], 00:16:55.418 | 
70.00th=[ 21], 80.00th=[ 28], 90.00th=[ 50], 95.00th=[ 72], 00:16:55.418 | 99.00th=[ 305], 99.50th=[ 309], 99.90th=[ 334], 99.95th=[ 334], 00:16:55.418 | 99.99th=[ 334] 00:16:55.418 write: IOPS=69, BW=8960KiB/s (9175kB/s)(74.2MiB/8486msec); 0 zone resets 00:16:55.418 slat (usec): min=47, max=2346, avg=159.98, stdev=228.68 00:16:55.418 clat (msec): min=50, max=410, avg=113.57, stdev=51.66 00:16:55.418 lat (msec): min=50, max=410, avg=113.73, stdev=51.67 00:16:55.418 clat percentiles (msec): 00:16:55.418 | 1.00th=[ 62], 5.00th=[ 65], 10.00th=[ 69], 20.00th=[ 74], 00:16:55.418 | 30.00th=[ 81], 40.00th=[ 87], 50.00th=[ 97], 60.00th=[ 108], 00:16:55.418 | 70.00th=[ 124], 80.00th=[ 150], 90.00th=[ 184], 95.00th=[ 209], 00:16:55.418 | 99.00th=[ 317], 99.50th=[ 363], 99.90th=[ 409], 99.95th=[ 409], 00:16:55.418 | 99.99th=[ 409] 00:16:55.418 bw ( KiB/s): min= 2810, max=13312, per=0.71%, avg=7634.89, stdev=3366.42, samples=18 00:16:55.418 iops : min= 21, max= 104, avg=59.50, stdev=26.29, samples=18 00:16:55.418 lat (msec) : 10=7.36%, 20=24.02%, 50=8.85%, 100=32.22%, 250=25.61% 00:16:55.418 lat (msec) : 500=1.96% 00:16:55.418 cpu : usr=0.57%, sys=0.18%, ctx=1826, majf=0, minf=6 00:16:55.418 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.418 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.418 issued rwts: total=480,594,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.418 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.418 job53: (groupid=0, jobs=1): err= 0: pid=75090: Wed Jul 24 05:06:09 2024 00:16:55.418 read: IOPS=81, BW=10.2MiB/s (10.7MB/s)(80.0MiB/7876msec) 00:16:55.418 slat (usec): min=7, max=1527, avg=48.73, stdev=115.36 00:16:55.418 clat (usec): min=4952, max=84644, avg=15037.07, stdev=11019.90 00:16:55.418 lat (usec): min=4965, max=84658, avg=15085.80, stdev=11014.87 00:16:55.418 clat percentiles (usec): 
00:16:55.418 | 1.00th=[ 5407], 5.00th=[ 6390], 10.00th=[ 6915], 20.00th=[ 8586], 00:16:55.418 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[11994], 60.00th=[13829], 00:16:55.418 | 70.00th=[15795], 80.00th=[19268], 90.00th=[24249], 95.00th=[34866], 00:16:55.418 | 99.00th=[80217], 99.50th=[84411], 99.90th=[84411], 99.95th=[84411], 00:16:55.418 | 99.99th=[84411] 00:16:55.418 write: IOPS=75, BW=9612KiB/s (9843kB/s)(82.9MiB/8829msec); 0 zone resets 00:16:55.418 slat (usec): min=40, max=2977, avg=154.57, stdev=237.30 00:16:55.418 clat (msec): min=19, max=264, avg=105.78, stdev=44.05 00:16:55.418 lat (msec): min=20, max=264, avg=105.94, stdev=44.05 00:16:55.418 clat percentiles (msec): 00:16:55.418 | 1.00th=[ 26], 5.00th=[ 62], 10.00th=[ 64], 20.00th=[ 67], 00:16:55.418 | 30.00th=[ 73], 40.00th=[ 79], 50.00th=[ 91], 60.00th=[ 110], 00:16:55.418 | 70.00th=[ 126], 80.00th=[ 150], 90.00th=[ 174], 95.00th=[ 186], 00:16:55.418 | 99.00th=[ 234], 99.50th=[ 243], 99.90th=[ 266], 99.95th=[ 266], 00:16:55.418 | 99.99th=[ 266] 00:16:55.418 bw ( KiB/s): min= 2560, max=14307, per=0.79%, avg=8390.55, stdev=3744.54, samples=20 00:16:55.418 iops : min= 20, max= 111, avg=65.35, stdev=29.22, samples=20 00:16:55.418 lat (msec) : 10=18.42%, 20=22.03%, 50=8.37%, 100=28.55%, 250=22.56% 00:16:55.418 lat (msec) : 500=0.08% 00:16:55.418 cpu : usr=0.57%, sys=0.30%, ctx=2134, majf=0, minf=3 00:16:55.418 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.418 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.418 issued rwts: total=640,663,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.418 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.418 job54: (groupid=0, jobs=1): err= 0: pid=75091: Wed Jul 24 05:06:09 2024 00:16:55.418 read: IOPS=76, BW=9743KiB/s (9977kB/s)(80.0MiB/8408msec) 00:16:55.418 slat (usec): min=7, max=1560, avg=55.03, 
stdev=115.99 00:16:55.418 clat (msec): min=4, max=178, avg=18.00, stdev=19.65 00:16:55.418 lat (msec): min=4, max=178, avg=18.06, stdev=19.65 00:16:55.418 clat percentiles (msec): 00:16:55.418 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 9], 00:16:55.418 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 17], 00:16:55.418 | 70.00th=[ 19], 80.00th=[ 23], 90.00th=[ 29], 95.00th=[ 36], 00:16:55.418 | 99.00th=[ 159], 99.50th=[ 167], 99.90th=[ 180], 99.95th=[ 180], 00:16:55.418 | 99.99th=[ 180] 00:16:55.418 write: IOPS=75, BW=9698KiB/s (9931kB/s)(81.4MiB/8592msec); 0 zone resets 00:16:55.418 slat (usec): min=43, max=5123, avg=163.34, stdev=285.87 00:16:55.418 clat (msec): min=26, max=299, avg=104.66, stdev=48.03 00:16:55.418 lat (msec): min=27, max=299, avg=104.83, stdev=48.01 00:16:55.418 clat percentiles (msec): 00:16:55.418 | 1.00th=[ 42], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 65], 00:16:55.418 | 30.00th=[ 72], 40.00th=[ 79], 50.00th=[ 91], 60.00th=[ 101], 00:16:55.418 | 70.00th=[ 113], 80.00th=[ 144], 90.00th=[ 174], 95.00th=[ 190], 00:16:55.418 | 99.00th=[ 284], 99.50th=[ 288], 99.90th=[ 300], 99.95th=[ 300], 00:16:55.418 | 99.99th=[ 300] 00:16:55.418 bw ( KiB/s): min= 512, max=14592, per=0.77%, avg=8224.45, stdev=4305.72, samples=20 00:16:55.418 iops : min= 4, max= 114, avg=64.05, stdev=33.65, samples=20 00:16:55.418 lat (msec) : 10=14.33%, 20=22.46%, 50=11.77%, 100=30.29%, 250=20.06% 00:16:55.419 lat (msec) : 500=1.08% 00:16:55.419 cpu : usr=0.58%, sys=0.31%, ctx=2213, majf=0, minf=9 00:16:55.419 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.419 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.419 issued rwts: total=640,651,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.419 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.419 job55: (groupid=0, jobs=1): err= 0: pid=75092: Wed Jul 24 
05:06:09 2024 00:16:55.419 read: IOPS=78, BW=9.78MiB/s (10.3MB/s)(80.0MiB/8177msec) 00:16:55.419 slat (usec): min=7, max=2573, avg=55.91, stdev=144.63 00:16:55.419 clat (usec): min=3450, max=67914, avg=13890.89, stdev=8701.12 00:16:55.419 lat (usec): min=3772, max=67929, avg=13946.81, stdev=8694.27 00:16:55.419 clat percentiles (usec): 00:16:55.419 | 1.00th=[ 5604], 5.00th=[ 6325], 10.00th=[ 6652], 20.00th=[ 7832], 00:16:55.419 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[11076], 60.00th=[12125], 00:16:55.419 | 70.00th=[14353], 80.00th=[19006], 90.00th=[23725], 95.00th=[29492], 00:16:55.419 | 99.00th=[51119], 99.50th=[59507], 99.90th=[67634], 99.95th=[67634], 00:16:55.419 | 99.99th=[67634] 00:16:55.419 write: IOPS=72, BW=9313KiB/s (9537kB/s)(81.0MiB/8906msec); 0 zone resets 00:16:55.419 slat (usec): min=40, max=3300, avg=166.16, stdev=260.43 00:16:55.419 clat (msec): min=57, max=433, avg=109.25, stdev=54.45 00:16:55.419 lat (msec): min=57, max=433, avg=109.42, stdev=54.47 00:16:55.419 clat percentiles (msec): 00:16:55.419 | 1.00th=[ 61], 5.00th=[ 63], 10.00th=[ 64], 20.00th=[ 68], 00:16:55.419 | 30.00th=[ 73], 40.00th=[ 81], 50.00th=[ 89], 60.00th=[ 101], 00:16:55.419 | 70.00th=[ 123], 80.00th=[ 150], 90.00th=[ 182], 95.00th=[ 197], 00:16:55.419 | 99.00th=[ 330], 99.50th=[ 372], 99.90th=[ 435], 99.95th=[ 435], 00:16:55.419 | 99.99th=[ 435] 00:16:55.419 bw ( KiB/s): min= 1024, max=15104, per=0.78%, avg=8311.05, stdev=3706.97, samples=19 00:16:55.419 iops : min= 8, max= 118, avg=64.79, stdev=29.04, samples=19 00:16:55.419 lat (msec) : 4=0.16%, 10=18.32%, 20=22.75%, 50=7.76%, 100=30.90% 00:16:55.419 lat (msec) : 250=18.94%, 500=1.16% 00:16:55.419 cpu : usr=0.68%, sys=0.20%, ctx=2132, majf=0, minf=1 00:16:55.419 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.419 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.419 
issued rwts: total=640,648,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.419 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.419 job56: (groupid=0, jobs=1): err= 0: pid=75093: Wed Jul 24 05:06:09 2024 00:16:55.419 read: IOPS=74, BW=9473KiB/s (9700kB/s)(80.0MiB/8648msec) 00:16:55.419 slat (usec): min=7, max=1067, avg=49.98, stdev=106.82 00:16:55.419 clat (usec): min=9274, max=43332, avg=17474.41, stdev=5177.89 00:16:55.419 lat (usec): min=9301, max=43347, avg=17524.39, stdev=5180.73 00:16:55.419 clat percentiles (usec): 00:16:55.419 | 1.00th=[10945], 5.00th=[11731], 10.00th=[12125], 20.00th=[13435], 00:16:55.419 | 30.00th=[14615], 40.00th=[15664], 50.00th=[16450], 60.00th=[17171], 00:16:55.419 | 70.00th=[18220], 80.00th=[20579], 90.00th=[23987], 95.00th=[28181], 00:16:55.419 | 99.00th=[34341], 99.50th=[41157], 99.90th=[43254], 99.95th=[43254], 00:16:55.419 | 99.99th=[43254] 00:16:55.419 write: IOPS=86, BW=10.8MiB/s (11.3MB/s)(93.1MiB/8631msec); 0 zone resets 00:16:55.419 slat (usec): min=48, max=1292, avg=122.95, stdev=125.63 00:16:55.419 clat (msec): min=41, max=448, avg=89.89, stdev=47.61 00:16:55.419 lat (msec): min=41, max=448, avg=90.01, stdev=47.61 00:16:55.419 clat percentiles (msec): 00:16:55.419 | 1.00th=[ 50], 5.00th=[ 62], 10.00th=[ 63], 20.00th=[ 64], 00:16:55.419 | 30.00th=[ 66], 40.00th=[ 70], 50.00th=[ 74], 60.00th=[ 82], 00:16:55.419 | 70.00th=[ 90], 80.00th=[ 100], 90.00th=[ 138], 95.00th=[ 174], 00:16:55.419 | 99.00th=[ 338], 99.50th=[ 347], 99.90th=[ 447], 99.95th=[ 447], 00:16:55.419 | 99.99th=[ 447] 00:16:55.419 bw ( KiB/s): min= 512, max=15616, per=0.93%, avg=9941.21, stdev=4798.71, samples=19 00:16:55.419 iops : min= 4, max= 122, avg=77.53, stdev=37.42, samples=19 00:16:55.419 lat (msec) : 10=0.29%, 20=35.74%, 50=10.76%, 100=42.89%, 250=9.03% 00:16:55.419 lat (msec) : 500=1.30% 00:16:55.419 cpu : usr=0.69%, sys=0.26%, ctx=2221, majf=0, minf=5 00:16:55.419 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:16:55.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.419 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.419 issued rwts: total=640,745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.419 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.419 job57: (groupid=0, jobs=1): err= 0: pid=75094: Wed Jul 24 05:06:09 2024 00:16:55.419 read: IOPS=73, BW=9391KiB/s (9617kB/s)(80.0MiB/8723msec) 00:16:55.419 slat (usec): min=7, max=1169, avg=54.77, stdev=109.91 00:16:55.419 clat (usec): min=6452, max=62346, avg=19725.19, stdev=7591.94 00:16:55.419 lat (usec): min=6477, max=62356, avg=19779.96, stdev=7585.08 00:16:55.419 clat percentiles (usec): 00:16:55.419 | 1.00th=[ 9372], 5.00th=[10683], 10.00th=[11994], 20.00th=[14222], 00:16:55.419 | 30.00th=[15401], 40.00th=[16319], 50.00th=[17695], 60.00th=[19006], 00:16:55.419 | 70.00th=[21627], 80.00th=[25560], 90.00th=[30016], 95.00th=[33817], 00:16:55.419 | 99.00th=[46400], 99.50th=[51119], 99.90th=[62129], 99.95th=[62129], 00:16:55.419 | 99.99th=[62129] 00:16:55.419 write: IOPS=86, BW=10.8MiB/s (11.3MB/s)(91.5MiB/8471msec); 0 zone resets 00:16:55.419 slat (usec): min=35, max=15165, avg=160.81, stdev=577.09 00:16:55.419 clat (msec): min=8, max=395, avg=91.60, stdev=50.18 00:16:55.419 lat (msec): min=8, max=395, avg=91.76, stdev=50.14 00:16:55.419 clat percentiles (msec): 00:16:55.419 | 1.00th=[ 18], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 65], 00:16:55.419 | 30.00th=[ 69], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 82], 00:16:55.419 | 70.00th=[ 88], 80.00th=[ 100], 90.00th=[ 142], 95.00th=[ 199], 00:16:55.419 | 99.00th=[ 305], 99.50th=[ 347], 99.90th=[ 397], 99.95th=[ 397], 00:16:55.419 | 99.99th=[ 397] 00:16:55.419 bw ( KiB/s): min= 1024, max=17152, per=0.87%, avg=9261.50, stdev=5095.43, samples=20 00:16:55.419 iops : min= 8, max= 134, avg=72.25, stdev=39.85, samples=20 00:16:55.419 lat (msec) : 10=1.38%, 20=29.30%, 50=16.62%, 100=42.78%, 250=8.09% 
00:16:55.419 lat (msec) : 500=1.82% 00:16:55.419 cpu : usr=0.65%, sys=0.33%, ctx=2176, majf=0, minf=1 00:16:55.419 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.419 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.419 issued rwts: total=640,732,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.419 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.419 job58: (groupid=0, jobs=1): err= 0: pid=75095: Wed Jul 24 05:06:09 2024 00:16:55.419 read: IOPS=76, BW=9786KiB/s (10.0MB/s)(80.0MiB/8371msec) 00:16:55.419 slat (usec): min=7, max=784, avg=55.44, stdev=91.69 00:16:55.419 clat (usec): min=7544, max=54275, avg=18279.57, stdev=7330.98 00:16:55.419 lat (usec): min=7565, max=54282, avg=18335.00, stdev=7327.23 00:16:55.419 clat percentiles (usec): 00:16:55.419 | 1.00th=[ 7832], 5.00th=[ 9241], 10.00th=[10814], 20.00th=[12256], 00:16:55.419 | 30.00th=[14091], 40.00th=[15533], 50.00th=[16909], 60.00th=[17957], 00:16:55.419 | 70.00th=[20055], 80.00th=[23200], 90.00th=[28443], 95.00th=[33817], 00:16:55.419 | 99.00th=[41681], 99.50th=[47449], 99.90th=[54264], 99.95th=[54264], 00:16:55.419 | 99.99th=[54264] 00:16:55.419 write: IOPS=85, BW=10.7MiB/s (11.2MB/s)(91.9MiB/8568msec); 0 zone resets 00:16:55.419 slat (usec): min=45, max=2639, avg=138.68, stdev=184.81 00:16:55.419 clat (msec): min=50, max=314, avg=92.24, stdev=43.43 00:16:55.419 lat (msec): min=50, max=314, avg=92.38, stdev=43.44 00:16:55.419 clat percentiles (msec): 00:16:55.419 | 1.00th=[ 59], 5.00th=[ 62], 10.00th=[ 63], 20.00th=[ 65], 00:16:55.419 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 84], 00:16:55.419 | 70.00th=[ 94], 80.00th=[ 107], 90.00th=[ 148], 95.00th=[ 184], 00:16:55.419 | 99.00th=[ 284], 99.50th=[ 300], 99.90th=[ 317], 99.95th=[ 317], 00:16:55.419 | 99.99th=[ 317] 00:16:55.419 bw ( KiB/s): min= 1024, max=15104, per=0.87%, 
avg=9319.25, stdev=4650.97, samples=20 00:16:55.419 iops : min= 8, max= 118, avg=72.75, stdev=36.38, samples=20 00:16:55.419 lat (msec) : 10=3.78%, 20=28.95%, 50=13.60%, 100=41.02%, 250=11.56% 00:16:55.419 lat (msec) : 500=1.09% 00:16:55.419 cpu : usr=0.74%, sys=0.19%, ctx=2279, majf=0, minf=7 00:16:55.419 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.419 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.419 issued rwts: total=640,735,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.419 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.419 job59: (groupid=0, jobs=1): err= 0: pid=75096: Wed Jul 24 05:06:09 2024 00:16:55.419 read: IOPS=72, BW=9249KiB/s (9471kB/s)(80.0MiB/8857msec) 00:16:55.419 slat (usec): min=7, max=1648, avg=58.81, stdev=122.81 00:16:55.419 clat (usec): min=5867, max=93394, avg=17068.84, stdev=11967.38 00:16:55.419 lat (usec): min=5899, max=93405, avg=17127.65, stdev=11971.25 00:16:55.419 clat percentiles (usec): 00:16:55.419 | 1.00th=[ 6325], 5.00th=[ 6718], 10.00th=[ 8356], 20.00th=[10290], 00:16:55.419 | 30.00th=[11207], 40.00th=[12125], 50.00th=[13173], 60.00th=[15270], 00:16:55.419 | 70.00th=[17171], 80.00th=[20841], 90.00th=[29492], 95.00th=[37487], 00:16:55.419 | 99.00th=[77071], 99.50th=[82314], 99.90th=[93848], 99.95th=[93848], 00:16:55.419 | 99.99th=[93848] 00:16:55.419 write: IOPS=85, BW=10.7MiB/s (11.2MB/s)(92.9MiB/8678msec); 0 zone resets 00:16:55.419 slat (usec): min=51, max=26858, avg=230.20, stdev=1102.03 00:16:55.419 clat (usec): min=1688, max=359216, avg=92650.81, stdev=45899.82 00:16:55.419 lat (usec): min=1810, max=359280, avg=92881.01, stdev=45830.33 00:16:55.419 clat percentiles (msec): 00:16:55.419 | 1.00th=[ 6], 5.00th=[ 59], 10.00th=[ 62], 20.00th=[ 65], 00:16:55.419 | 30.00th=[ 68], 40.00th=[ 75], 50.00th=[ 82], 60.00th=[ 86], 00:16:55.419 | 70.00th=[ 97], 
80.00th=[ 113], 90.00th=[ 150], 95.00th=[ 184], 00:16:55.419 | 99.00th=[ 271], 99.50th=[ 305], 99.90th=[ 359], 99.95th=[ 359], 00:16:55.419 | 99.99th=[ 359] 00:16:55.419 bw ( KiB/s): min= 1017, max=20224, per=0.88%, avg=9378.20, stdev=4914.73, samples=20 00:16:55.420 iops : min= 7, max= 158, avg=72.80, stdev=38.45, samples=20 00:16:55.420 lat (msec) : 2=0.22%, 4=0.14%, 10=8.03%, 20=29.57%, 50=9.83% 00:16:55.420 lat (msec) : 100=37.31%, 250=14.03%, 500=0.87% 00:16:55.420 cpu : usr=0.69%, sys=0.28%, ctx=2240, majf=0, minf=3 00:16:55.420 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.420 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.420 issued rwts: total=640,743,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.420 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.420 job60: (groupid=0, jobs=1): err= 0: pid=75097: Wed Jul 24 05:06:09 2024 00:16:55.420 read: IOPS=107, BW=13.5MiB/s (14.1MB/s)(120MiB/8903msec) 00:16:55.420 slat (usec): min=7, max=1423, avg=43.68, stdev=94.64 00:16:55.420 clat (usec): min=3351, max=34899, avg=10589.52, stdev=5081.33 00:16:55.420 lat (usec): min=3459, max=34909, avg=10633.21, stdev=5081.27 00:16:55.420 clat percentiles (usec): 00:16:55.420 | 1.00th=[ 3687], 5.00th=[ 4178], 10.00th=[ 4817], 20.00th=[ 5997], 00:16:55.420 | 30.00th=[ 7308], 40.00th=[ 8717], 50.00th=[ 9896], 60.00th=[10683], 00:16:55.420 | 70.00th=[12256], 80.00th=[14353], 90.00th=[17171], 95.00th=[20317], 00:16:55.420 | 99.00th=[26346], 99.50th=[27919], 99.90th=[34866], 99.95th=[34866], 00:16:55.420 | 99.99th=[34866] 00:16:55.420 write: IOPS=125, BW=15.7MiB/s (16.5MB/s)(138MiB/8758msec); 0 zone resets 00:16:55.420 slat (usec): min=40, max=4186, avg=119.60, stdev=208.56 00:16:55.420 clat (msec): min=16, max=234, avg=63.02, stdev=27.95 00:16:55.420 lat (msec): min=16, max=234, avg=63.14, stdev=27.95 00:16:55.420 
clat percentiles (msec): 00:16:55.420 | 1.00th=[ 29], 5.00th=[ 41], 10.00th=[ 44], 20.00th=[ 46], 00:16:55.420 | 30.00th=[ 50], 40.00th=[ 53], 50.00th=[ 55], 60.00th=[ 59], 00:16:55.420 | 70.00th=[ 63], 80.00th=[ 73], 90.00th=[ 90], 95.00th=[ 120], 00:16:55.420 | 99.00th=[ 184], 99.50th=[ 211], 99.90th=[ 230], 99.95th=[ 234], 00:16:55.420 | 99.99th=[ 234] 00:16:55.420 bw ( KiB/s): min= 2048, max=21504, per=1.31%, avg=13975.40, stdev=6541.95, samples=20 00:16:55.420 iops : min= 16, max= 168, avg=109.10, stdev=51.16, samples=20 00:16:55.420 lat (msec) : 4=1.50%, 10=22.82%, 20=19.90%, 50=19.71%, 100=31.70% 00:16:55.420 lat (msec) : 250=4.37% 00:16:55.420 cpu : usr=0.84%, sys=0.34%, ctx=3135, majf=0, minf=3 00:16:55.420 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.420 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.420 issued rwts: total=960,1100,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.420 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.420 job61: (groupid=0, jobs=1): err= 0: pid=75098: Wed Jul 24 05:06:09 2024 00:16:55.420 read: IOPS=98, BW=12.3MiB/s (12.9MB/s)(106MiB/8604msec) 00:16:55.420 slat (usec): min=7, max=1571, avg=45.11, stdev=120.26 00:16:55.420 clat (msec): min=3, max=142, avg=14.08, stdev=16.79 00:16:55.420 lat (msec): min=3, max=142, avg=14.12, stdev=16.79 00:16:55.420 clat percentiles (msec): 00:16:55.420 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 7], 20.00th=[ 8], 00:16:55.420 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 11], 00:16:55.420 | 70.00th=[ 13], 80.00th=[ 15], 90.00th=[ 25], 95.00th=[ 36], 00:16:55.420 | 99.00th=[ 78], 99.50th=[ 140], 99.90th=[ 144], 99.95th=[ 144], 00:16:55.420 | 99.99th=[ 144] 00:16:55.420 write: IOPS=112, BW=14.1MiB/s (14.8MB/s)(120MiB/8501msec); 0 zone resets 00:16:55.420 slat (usec): min=40, max=10159, avg=144.63, stdev=439.31 00:16:55.420 
clat (msec): min=18, max=190, avg=70.15, stdev=28.64 00:16:55.420 lat (msec): min=19, max=190, avg=70.29, stdev=28.62 00:16:55.420 clat percentiles (msec): 00:16:55.420 | 1.00th=[ 38], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 47], 00:16:55.420 | 30.00th=[ 53], 40.00th=[ 57], 50.00th=[ 62], 60.00th=[ 68], 00:16:55.420 | 70.00th=[ 77], 80.00th=[ 90], 90.00th=[ 109], 95.00th=[ 130], 00:16:55.420 | 99.00th=[ 174], 99.50th=[ 176], 99.90th=[ 190], 99.95th=[ 190], 00:16:55.420 | 99.99th=[ 190] 00:16:55.420 bw ( KiB/s): min= 2816, max=21248, per=1.16%, avg=12385.63, stdev=5179.93, samples=19 00:16:55.420 iops : min= 22, max= 166, avg=96.63, stdev=40.47, samples=19 00:16:55.420 lat (msec) : 4=0.55%, 10=25.80%, 20=14.95%, 50=17.44%, 100=32.83% 00:16:55.420 lat (msec) : 250=8.42% 00:16:55.420 cpu : usr=0.72%, sys=0.34%, ctx=2885, majf=0, minf=3 00:16:55.420 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.420 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.420 issued rwts: total=846,960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.420 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.420 job62: (groupid=0, jobs=1): err= 0: pid=75099: Wed Jul 24 05:06:09 2024 00:16:55.420 read: IOPS=105, BW=13.2MiB/s (13.9MB/s)(120MiB/9077msec) 00:16:55.420 slat (usec): min=7, max=1529, avg=50.77, stdev=115.82 00:16:55.420 clat (msec): min=3, max=164, avg=12.81, stdev=17.50 00:16:55.420 lat (msec): min=3, max=164, avg=12.86, stdev=17.50 00:16:55.420 clat percentiles (msec): 00:16:55.420 | 1.00th=[ 5], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 6], 00:16:55.420 | 30.00th=[ 7], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 11], 00:16:55.420 | 70.00th=[ 12], 80.00th=[ 15], 90.00th=[ 18], 95.00th=[ 23], 00:16:55.420 | 99.00th=[ 95], 99.50th=[ 161], 99.90th=[ 165], 99.95th=[ 165], 00:16:55.420 | 99.99th=[ 165] 00:16:55.420 write: IOPS=128, 
BW=16.0MiB/s (16.8MB/s)(136MiB/8491msec); 0 zone resets 00:16:55.420 slat (usec): min=40, max=4395, avg=133.31, stdev=215.36 00:16:55.420 clat (msec): min=2, max=176, avg=61.74, stdev=22.18 00:16:55.420 lat (msec): min=3, max=176, avg=61.87, stdev=22.18 00:16:55.420 clat percentiles (msec): 00:16:55.420 | 1.00th=[ 18], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 47], 00:16:55.420 | 30.00th=[ 49], 40.00th=[ 53], 50.00th=[ 57], 60.00th=[ 61], 00:16:55.420 | 70.00th=[ 65], 80.00th=[ 74], 90.00th=[ 90], 95.00th=[ 108], 00:16:55.420 | 99.00th=[ 142], 99.50th=[ 163], 99.90th=[ 169], 99.95th=[ 178], 00:16:55.420 | 99.99th=[ 178] 00:16:55.420 bw ( KiB/s): min= 1280, max=23808, per=1.30%, avg=13842.60, stdev=6420.73, samples=20 00:16:55.420 iops : min= 10, max= 186, avg=108.05, stdev=50.17, samples=20 00:16:55.420 lat (msec) : 4=0.49%, 10=25.80%, 20=18.44%, 50=19.27%, 100=31.56% 00:16:55.420 lat (msec) : 250=4.44% 00:16:55.420 cpu : usr=0.92%, sys=0.28%, ctx=3240, majf=0, minf=3 00:16:55.420 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.420 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.420 issued rwts: total=960,1090,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.420 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.420 job63: (groupid=0, jobs=1): err= 0: pid=75100: Wed Jul 24 05:06:09 2024 00:16:55.420 read: IOPS=107, BW=13.5MiB/s (14.1MB/s)(120MiB/8921msec) 00:16:55.420 slat (usec): min=7, max=1469, avg=49.76, stdev=115.22 00:16:55.420 clat (usec): min=4125, max=97238, avg=10697.66, stdev=8477.42 00:16:55.420 lat (usec): min=4143, max=97248, avg=10747.41, stdev=8476.66 00:16:55.420 clat percentiles (usec): 00:16:55.420 | 1.00th=[ 4293], 5.00th=[ 4752], 10.00th=[ 5276], 20.00th=[ 6325], 00:16:55.420 | 30.00th=[ 7504], 40.00th=[ 8225], 50.00th=[ 8848], 60.00th=[ 9503], 00:16:55.420 | 70.00th=[10683], 
80.00th=[12387], 90.00th=[16188], 95.00th=[21627], 00:16:55.420 | 99.00th=[46400], 99.50th=[74974], 99.90th=[96994], 99.95th=[96994], 00:16:55.420 | 99.99th=[96994] 00:16:55.420 write: IOPS=120, BW=15.1MiB/s (15.8MB/s)(132MiB/8753msec); 0 zone resets 00:16:55.420 slat (usec): min=42, max=56667, avg=179.41, stdev=1765.37 00:16:55.420 clat (usec): min=1608, max=280483, avg=65585.32, stdev=28583.70 00:16:55.420 lat (usec): min=1671, max=280554, avg=65764.72, stdev=28579.06 00:16:55.420 clat percentiles (msec): 00:16:55.420 | 1.00th=[ 39], 5.00th=[ 43], 10.00th=[ 45], 20.00th=[ 47], 00:16:55.420 | 30.00th=[ 51], 40.00th=[ 54], 50.00th=[ 58], 60.00th=[ 62], 00:16:55.420 | 70.00th=[ 68], 80.00th=[ 79], 90.00th=[ 99], 95.00th=[ 112], 00:16:55.420 | 99.00th=[ 188], 99.50th=[ 230], 99.90th=[ 279], 99.95th=[ 279], 00:16:55.420 | 99.99th=[ 279] 00:16:55.420 bw ( KiB/s): min= 2048, max=21290, per=1.25%, avg=13360.50, stdev=5414.44, samples=20 00:16:55.420 iops : min= 16, max= 166, avg=104.05, stdev=42.34, samples=20 00:16:55.420 lat (msec) : 2=0.10%, 4=0.15%, 10=31.71%, 20=13.20%, 50=17.12% 00:16:55.420 lat (msec) : 100=33.25%, 250=4.32%, 500=0.15% 00:16:55.420 cpu : usr=0.84%, sys=0.31%, ctx=3044, majf=0, minf=3 00:16:55.420 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.420 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.420 issued rwts: total=960,1055,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.420 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.420 job64: (groupid=0, jobs=1): err= 0: pid=75101: Wed Jul 24 05:06:09 2024 00:16:55.420 read: IOPS=95, BW=11.9MiB/s (12.5MB/s)(102MiB/8579msec) 00:16:55.420 slat (usec): min=7, max=2515, avg=45.43, stdev=119.73 00:16:55.420 clat (msec): min=2, max=151, avg=13.89, stdev=19.02 00:16:55.420 lat (msec): min=2, max=151, avg=13.94, stdev=19.02 00:16:55.420 clat 
percentiles (msec): 00:16:55.420 | 1.00th=[ 4], 5.00th=[ 4], 10.00th=[ 6], 20.00th=[ 7], 00:16:55.420 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 11], 00:16:55.420 | 70.00th=[ 13], 80.00th=[ 16], 90.00th=[ 21], 95.00th=[ 27], 00:16:55.420 | 99.00th=[ 134], 99.50th=[ 148], 99.90th=[ 153], 99.95th=[ 153], 00:16:55.420 | 99.99th=[ 153] 00:16:55.420 write: IOPS=112, BW=14.0MiB/s (14.7MB/s)(120MiB/8570msec); 0 zone resets 00:16:55.420 slat (usec): min=41, max=6681, avg=133.73, stdev=333.13 00:16:55.420 clat (msec): min=39, max=205, avg=70.88, stdev=29.39 00:16:55.420 lat (msec): min=39, max=205, avg=71.02, stdev=29.38 00:16:55.420 clat percentiles (msec): 00:16:55.420 | 1.00th=[ 42], 5.00th=[ 44], 10.00th=[ 46], 20.00th=[ 48], 00:16:55.420 | 30.00th=[ 53], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 66], 00:16:55.420 | 70.00th=[ 75], 80.00th=[ 89], 90.00th=[ 117], 95.00th=[ 132], 00:16:55.420 | 99.00th=[ 169], 99.50th=[ 188], 99.90th=[ 207], 99.95th=[ 207], 00:16:55.420 | 99.99th=[ 207] 00:16:55.421 bw ( KiB/s): min= 5376, max=18688, per=1.13%, avg=12068.47, stdev=4978.84, samples=19 00:16:55.421 iops : min= 42, max= 146, avg=94.16, stdev=38.75, samples=19 00:16:55.421 lat (msec) : 4=2.36%, 10=22.69%, 20=16.05%, 50=16.05%, 100=33.84% 00:16:55.421 lat (msec) : 250=9.01% 00:16:55.421 cpu : usr=0.68%, sys=0.36%, ctx=2673, majf=0, minf=5 00:16:55.421 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.421 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.421 issued rwts: total=816,960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.421 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.421 job65: (groupid=0, jobs=1): err= 0: pid=75102: Wed Jul 24 05:06:09 2024 00:16:55.421 read: IOPS=110, BW=13.8MiB/s (14.4MB/s)(120MiB/8719msec) 00:16:55.421 slat (usec): min=7, max=2518, avg=51.71, stdev=122.66 00:16:55.421 clat 
(usec): min=2468, max=71558, avg=13253.62, stdev=9028.41 00:16:55.421 lat (usec): min=2484, max=71573, avg=13305.33, stdev=9036.67 00:16:55.421 clat percentiles (usec): 00:16:55.421 | 1.00th=[ 3916], 5.00th=[ 6063], 10.00th=[ 6849], 20.00th=[ 7439], 00:16:55.421 | 30.00th=[ 8586], 40.00th=[ 9372], 50.00th=[10552], 60.00th=[12387], 00:16:55.421 | 70.00th=[14091], 80.00th=[16909], 90.00th=[21365], 95.00th=[26346], 00:16:55.421 | 99.00th=[55837], 99.50th=[62653], 99.90th=[71828], 99.95th=[71828], 00:16:55.421 | 99.99th=[71828] 00:16:55.421 write: IOPS=116, BW=14.6MiB/s (15.3MB/s)(123MiB/8434msec); 0 zone resets 00:16:55.421 slat (usec): min=48, max=14735, avg=140.62, stdev=501.80 00:16:55.421 clat (msec): min=25, max=366, avg=67.88, stdev=35.24 00:16:55.421 lat (msec): min=26, max=366, avg=68.02, stdev=35.22 00:16:55.421 clat percentiles (msec): 00:16:55.421 | 1.00th=[ 39], 5.00th=[ 42], 10.00th=[ 44], 20.00th=[ 47], 00:16:55.421 | 30.00th=[ 51], 40.00th=[ 54], 50.00th=[ 58], 60.00th=[ 63], 00:16:55.421 | 70.00th=[ 70], 80.00th=[ 82], 90.00th=[ 103], 95.00th=[ 128], 00:16:55.421 | 99.00th=[ 234], 99.50th=[ 275], 99.90th=[ 368], 99.95th=[ 368], 00:16:55.421 | 99.99th=[ 368] 00:16:55.421 bw ( KiB/s): min= 1024, max=22016, per=1.17%, avg=12472.75, stdev=5775.78, samples=20 00:16:55.421 iops : min= 8, max= 172, avg=97.35, stdev=45.10, samples=20 00:16:55.421 lat (msec) : 4=0.62%, 10=21.73%, 20=21.11%, 50=20.03%, 100=30.95% 00:16:55.421 lat (msec) : 250=5.25%, 500=0.31% 00:16:55.421 cpu : usr=0.84%, sys=0.31%, ctx=3083, majf=0, minf=3 00:16:55.421 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.421 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.421 issued rwts: total=960,982,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.421 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.421 job66: (groupid=0, jobs=1): 
err= 0: pid=75103: Wed Jul 24 05:06:09 2024 00:16:55.421 read: IOPS=104, BW=13.1MiB/s (13.8MB/s)(120MiB/9149msec) 00:16:55.421 slat (usec): min=7, max=1068, avg=41.40, stdev=83.68 00:16:55.421 clat (msec): min=3, max=130, avg=12.71, stdev=12.74 00:16:55.421 lat (msec): min=3, max=130, avg=12.75, stdev=12.73 00:16:55.421 clat percentiles (msec): 00:16:55.421 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 7], 00:16:55.421 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 12], 00:16:55.421 | 70.00th=[ 13], 80.00th=[ 16], 90.00th=[ 22], 95.00th=[ 30], 00:16:55.421 | 99.00th=[ 47], 99.50th=[ 128], 99.90th=[ 131], 99.95th=[ 131], 00:16:55.421 | 99.99th=[ 131] 00:16:55.421 write: IOPS=129, BW=16.2MiB/s (17.0MB/s)(138MiB/8503msec); 0 zone resets 00:16:55.421 slat (usec): min=41, max=1735, avg=120.65, stdev=165.62 00:16:55.421 clat (msec): min=13, max=216, avg=61.21, stdev=24.31 00:16:55.421 lat (msec): min=13, max=216, avg=61.33, stdev=24.31 00:16:55.421 clat percentiles (msec): 00:16:55.421 | 1.00th=[ 37], 5.00th=[ 42], 10.00th=[ 43], 20.00th=[ 46], 00:16:55.421 | 30.00th=[ 50], 40.00th=[ 53], 50.00th=[ 56], 60.00th=[ 59], 00:16:55.421 | 70.00th=[ 63], 80.00th=[ 70], 90.00th=[ 84], 95.00th=[ 108], 00:16:55.421 | 99.00th=[ 174], 99.50th=[ 199], 99.90th=[ 205], 99.95th=[ 218], 00:16:55.421 | 99.99th=[ 218] 00:16:55.421 bw ( KiB/s): min= 2816, max=23855, per=1.31%, avg=14002.85, stdev=6322.19, samples=20 00:16:55.421 iops : min= 22, max= 186, avg=109.25, stdev=49.48, samples=20 00:16:55.421 lat (msec) : 4=0.58%, 10=24.11%, 20=16.50%, 50=21.93%, 100=33.58% 00:16:55.421 lat (msec) : 250=3.30% 00:16:55.421 cpu : usr=0.81%, sys=0.38%, ctx=3191, majf=0, minf=1 00:16:55.421 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.421 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.421 issued rwts: total=960,1101,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:16:55.421 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.421 job67: (groupid=0, jobs=1): err= 0: pid=75104: Wed Jul 24 05:06:09 2024 00:16:55.421 read: IOPS=109, BW=13.7MiB/s (14.4MB/s)(116MiB/8403msec) 00:16:55.421 slat (usec): min=7, max=876, avg=38.66, stdev=70.15 00:16:55.421 clat (msec): min=3, max=126, avg=11.28, stdev=11.74 00:16:55.421 lat (msec): min=3, max=126, avg=11.32, stdev=11.74 00:16:55.421 clat percentiles (msec): 00:16:55.421 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 6], 00:16:55.421 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 11], 00:16:55.421 | 70.00th=[ 12], 80.00th=[ 14], 90.00th=[ 18], 95.00th=[ 22], 00:16:55.421 | 99.00th=[ 52], 99.50th=[ 123], 99.90th=[ 127], 99.95th=[ 127], 00:16:55.421 | 99.99th=[ 127] 00:16:55.421 write: IOPS=110, BW=13.8MiB/s (14.5MB/s)(120MiB/8683msec); 0 zone resets 00:16:55.421 slat (usec): min=46, max=2120, avg=131.30, stdev=175.85 00:16:55.421 clat (msec): min=38, max=197, avg=71.81, stdev=26.53 00:16:55.421 lat (msec): min=38, max=197, avg=71.95, stdev=26.55 00:16:55.421 clat percentiles (msec): 00:16:55.421 | 1.00th=[ 42], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 51], 00:16:55.421 | 30.00th=[ 55], 40.00th=[ 59], 50.00th=[ 64], 60.00th=[ 70], 00:16:55.421 | 70.00th=[ 79], 80.00th=[ 88], 90.00th=[ 111], 95.00th=[ 128], 00:16:55.421 | 99.00th=[ 165], 99.50th=[ 176], 99.90th=[ 197], 99.95th=[ 197], 00:16:55.421 | 99.99th=[ 197] 00:16:55.421 bw ( KiB/s): min= 6131, max=18688, per=1.15%, avg=12259.53, stdev=3801.16, samples=19 00:16:55.421 iops : min= 47, max= 146, avg=95.63, stdev=29.90, samples=19 00:16:55.421 lat (msec) : 4=1.38%, 10=28.13%, 20=15.92%, 50=11.78%, 100=35.40% 00:16:55.421 lat (msec) : 250=7.38% 00:16:55.421 cpu : usr=0.70%, sys=0.40%, ctx=2958, majf=0, minf=3 00:16:55.421 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.421 
complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.421 issued rwts: total=924,960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.421 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.421 job68: (groupid=0, jobs=1): err= 0: pid=75105: Wed Jul 24 05:06:09 2024 00:16:55.421 read: IOPS=110, BW=13.8MiB/s (14.4MB/s)(120MiB/8721msec) 00:16:55.421 slat (usec): min=7, max=757, avg=39.21, stdev=69.80 00:16:55.421 clat (usec): min=3178, max=49176, avg=9379.92, stdev=5285.14 00:16:55.421 lat (usec): min=3195, max=49192, avg=9419.13, stdev=5286.40 00:16:55.421 clat percentiles (usec): 00:16:55.421 | 1.00th=[ 4113], 5.00th=[ 4817], 10.00th=[ 5342], 20.00th=[ 6063], 00:16:55.421 | 30.00th=[ 6915], 40.00th=[ 7570], 50.00th=[ 8160], 60.00th=[ 9110], 00:16:55.421 | 70.00th=[ 9634], 80.00th=[10814], 90.00th=[14091], 95.00th=[17957], 00:16:55.421 | 99.00th=[35390], 99.50th=[41157], 99.90th=[49021], 99.95th=[49021], 00:16:55.421 | 99.99th=[49021] 00:16:55.421 write: IOPS=119, BW=14.9MiB/s (15.6MB/s)(133MiB/8902msec); 0 zone resets 00:16:55.421 slat (usec): min=37, max=13867, avg=137.88, stdev=451.25 00:16:55.421 clat (msec): min=20, max=234, avg=66.35, stdev=27.48 00:16:55.421 lat (msec): min=21, max=235, avg=66.48, stdev=27.49 00:16:55.421 clat percentiles (msec): 00:16:55.421 | 1.00th=[ 39], 5.00th=[ 42], 10.00th=[ 43], 20.00th=[ 46], 00:16:55.421 | 30.00th=[ 51], 40.00th=[ 55], 50.00th=[ 58], 60.00th=[ 63], 00:16:55.421 | 70.00th=[ 70], 80.00th=[ 84], 90.00th=[ 105], 95.00th=[ 118], 00:16:55.421 | 99.00th=[ 167], 99.50th=[ 194], 99.90th=[ 222], 99.95th=[ 234], 00:16:55.421 | 99.99th=[ 234] 00:16:55.421 bw ( KiB/s): min= 5120, max=20736, per=1.26%, avg=13486.00, stdev=4548.13, samples=20 00:16:55.421 iops : min= 40, max= 162, avg=105.25, stdev=35.55, samples=20 00:16:55.421 lat (msec) : 4=0.25%, 10=34.49%, 20=10.98%, 50=17.62%, 100=30.48% 00:16:55.421 lat (msec) : 250=6.19% 00:16:55.421 cpu : usr=0.86%, sys=0.30%, ctx=3268, majf=0, minf=3 
00:16:55.421 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.422 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.422 issued rwts: total=960,1061,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.422 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.422 job69: (groupid=0, jobs=1): err= 0: pid=75106: Wed Jul 24 05:06:09 2024 00:16:55.422 read: IOPS=110, BW=13.9MiB/s (14.5MB/s)(120MiB/8659msec) 00:16:55.422 slat (usec): min=5, max=581, avg=34.33, stdev=59.88 00:16:55.422 clat (usec): min=3126, max=63979, avg=10470.83, stdev=8351.41 00:16:55.422 lat (usec): min=3143, max=63994, avg=10505.16, stdev=8354.68 00:16:55.422 clat percentiles (usec): 00:16:55.422 | 1.00th=[ 3621], 5.00th=[ 3949], 10.00th=[ 4424], 20.00th=[ 5669], 00:16:55.422 | 30.00th=[ 6980], 40.00th=[ 7439], 50.00th=[ 8160], 60.00th=[ 9110], 00:16:55.422 | 70.00th=[10552], 80.00th=[12125], 90.00th=[17695], 95.00th=[22938], 00:16:55.422 | 99.00th=[57934], 99.50th=[60556], 99.90th=[64226], 99.95th=[64226], 00:16:55.422 | 99.99th=[64226] 00:16:55.422 write: IOPS=113, BW=14.1MiB/s (14.8MB/s)(124MiB/8750msec); 0 zone resets 00:16:55.422 slat (usec): min=40, max=5832, avg=138.34, stdev=280.23 00:16:55.422 clat (msec): min=33, max=218, avg=70.15, stdev=28.07 00:16:55.422 lat (msec): min=33, max=218, avg=70.29, stdev=28.07 00:16:55.422 clat percentiles (msec): 00:16:55.422 | 1.00th=[ 39], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 47], 00:16:55.422 | 30.00th=[ 52], 40.00th=[ 57], 50.00th=[ 63], 60.00th=[ 68], 00:16:55.422 | 70.00th=[ 78], 80.00th=[ 90], 90.00th=[ 109], 95.00th=[ 131], 00:16:55.422 | 99.00th=[ 159], 99.50th=[ 186], 99.90th=[ 220], 99.95th=[ 220], 00:16:55.422 | 99.99th=[ 220] 00:16:55.422 bw ( KiB/s): min= 4096, max=22016, per=1.18%, avg=12583.15, stdev=5054.14, samples=20 00:16:55.422 iops : min= 32, max= 172, avg=98.20, stdev=39.49, 
samples=20 00:16:55.422 lat (msec) : 4=2.72%, 10=29.08%, 20=13.54%, 50=16.92%, 100=30.46% 00:16:55.422 lat (msec) : 250=7.28% 00:16:55.422 cpu : usr=0.78%, sys=0.34%, ctx=3001, majf=0, minf=3 00:16:55.422 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.422 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.422 issued rwts: total=960,990,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.422 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.422 job70: (groupid=0, jobs=1): err= 0: pid=75107: Wed Jul 24 05:06:09 2024 00:16:55.422 read: IOPS=76, BW=9849KiB/s (10.1MB/s)(80.0MiB/8318msec) 00:16:55.422 slat (usec): min=7, max=1194, avg=47.37, stdev=99.00 00:16:55.422 clat (msec): min=7, max=244, avg=19.58, stdev=25.23 00:16:55.422 lat (msec): min=7, max=244, avg=19.63, stdev=25.23 00:16:55.422 clat percentiles (msec): 00:16:55.422 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:16:55.422 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 18], 00:16:55.422 | 70.00th=[ 21], 80.00th=[ 22], 90.00th=[ 26], 95.00th=[ 32], 00:16:55.422 | 99.00th=[ 230], 99.50th=[ 236], 99.90th=[ 245], 99.95th=[ 245], 00:16:55.422 | 99.99th=[ 245] 00:16:55.422 write: IOPS=87, BW=10.9MiB/s (11.5MB/s)(92.6MiB/8469msec); 0 zone resets 00:16:55.422 slat (usec): min=42, max=5706, avg=133.62, stdev=304.34 00:16:55.422 clat (msec): min=41, max=348, avg=90.52, stdev=40.44 00:16:55.422 lat (msec): min=41, max=348, avg=90.65, stdev=40.45 00:16:55.422 clat percentiles (msec): 00:16:55.422 | 1.00th=[ 48], 5.00th=[ 61], 10.00th=[ 62], 20.00th=[ 66], 00:16:55.422 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 86], 00:16:55.422 | 70.00th=[ 94], 80.00th=[ 104], 90.00th=[ 126], 95.00th=[ 165], 00:16:55.422 | 99.00th=[ 268], 99.50th=[ 326], 99.90th=[ 351], 99.95th=[ 351], 00:16:55.422 | 99.99th=[ 351] 00:16:55.422 bw ( KiB/s): 
min= 2048, max=14848, per=0.88%, avg=9390.90, stdev=4594.11, samples=20 00:16:55.422 iops : min= 16, max= 116, avg=73.20, stdev=35.86, samples=20 00:16:55.422 lat (msec) : 10=3.04%, 20=29.69%, 50=13.54%, 100=40.41%, 250=12.38% 00:16:55.422 lat (msec) : 500=0.94% 00:16:55.422 cpu : usr=0.55%, sys=0.24%, ctx=2141, majf=0, minf=5 00:16:55.422 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.422 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.422 issued rwts: total=640,741,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.422 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.422 job71: (groupid=0, jobs=1): err= 0: pid=75109: Wed Jul 24 05:06:09 2024 00:16:55.423 read: IOPS=75, BW=9660KiB/s (9892kB/s)(80.0MiB/8480msec) 00:16:55.423 slat (usec): min=7, max=1105, avg=46.06, stdev=92.95 00:16:55.423 clat (msec): min=5, max=136, avg=18.04, stdev=15.03 00:16:55.423 lat (msec): min=5, max=136, avg=18.08, stdev=15.03 00:16:55.423 clat percentiles (msec): 00:16:55.423 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:16:55.423 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 15], 60.00th=[ 17], 00:16:55.423 | 70.00th=[ 19], 80.00th=[ 22], 90.00th=[ 28], 95.00th=[ 40], 00:16:55.423 | 99.00th=[ 118], 99.50th=[ 131], 99.90th=[ 136], 99.95th=[ 136], 00:16:55.423 | 99.99th=[ 136] 00:16:55.423 write: IOPS=87, BW=10.9MiB/s (11.5MB/s)(94.2MiB/8608msec); 0 zone resets 00:16:55.423 slat (usec): min=27, max=5724, avg=145.45, stdev=338.30 00:16:55.423 clat (msec): min=22, max=290, avg=90.54, stdev=37.45 00:16:55.423 lat (msec): min=22, max=290, avg=90.69, stdev=37.45 00:16:55.423 clat percentiles (msec): 00:16:55.423 | 1.00th=[ 29], 5.00th=[ 62], 10.00th=[ 63], 20.00th=[ 66], 00:16:55.423 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 84], 00:16:55.423 | 70.00th=[ 96], 80.00th=[ 115], 90.00th=[ 142], 95.00th=[ 159], 
00:16:55.423 | 99.00th=[ 236], 99.50th=[ 279], 99.90th=[ 292], 99.95th=[ 292], 00:16:55.423 | 99.99th=[ 292] 00:16:55.423 bw ( KiB/s): min= 1277, max=15390, per=0.89%, avg=9538.90, stdev=4472.75, samples=20 00:16:55.423 iops : min= 9, max= 120, avg=74.20, stdev=34.95, samples=20 00:16:55.423 lat (msec) : 10=9.40%, 20=26.26%, 50=10.26%, 100=39.02%, 250=14.71% 00:16:55.423 lat (msec) : 500=0.36% 00:16:55.423 cpu : usr=0.56%, sys=0.28%, ctx=2186, majf=0, minf=7 00:16:55.423 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.423 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.423 issued rwts: total=640,754,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.423 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.423 job72: (groupid=0, jobs=1): err= 0: pid=75113: Wed Jul 24 05:06:09 2024 00:16:55.423 read: IOPS=80, BW=10.0MiB/s (10.5MB/s)(80.0MiB/7981msec) 00:16:55.423 slat (usec): min=7, max=2415, avg=59.56, stdev=147.06 00:16:55.423 clat (usec): min=5929, max=77859, avg=15573.68, stdev=9396.58 00:16:55.423 lat (usec): min=5948, max=77874, avg=15633.24, stdev=9389.23 00:16:55.423 clat percentiles (usec): 00:16:55.423 | 1.00th=[ 7767], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[10159], 00:16:55.423 | 30.00th=[11207], 40.00th=[12649], 50.00th=[13304], 60.00th=[14615], 00:16:55.423 | 70.00th=[15795], 80.00th=[17695], 90.00th=[20841], 95.00th=[31327], 00:16:55.423 | 99.00th=[63701], 99.50th=[71828], 99.90th=[78119], 99.95th=[78119], 00:16:55.423 | 99.99th=[78119] 00:16:55.423 write: IOPS=81, BW=10.2MiB/s (10.7MB/s)(89.8MiB/8794msec); 0 zone resets 00:16:55.423 slat (usec): min=48, max=17763, avg=160.40, stdev=702.05 00:16:55.423 clat (msec): min=42, max=299, avg=96.84, stdev=41.32 00:16:55.423 lat (msec): min=43, max=299, avg=97.00, stdev=41.30 00:16:55.423 clat percentiles (msec): 00:16:55.423 | 1.00th=[ 50], 5.00th=[ 
62], 10.00th=[ 63], 20.00th=[ 67], 00:16:55.423 | 30.00th=[ 71], 40.00th=[ 78], 50.00th=[ 84], 60.00th=[ 91], 00:16:55.423 | 70.00th=[ 101], 80.00th=[ 122], 90.00th=[ 150], 95.00th=[ 186], 00:16:55.423 | 99.00th=[ 241], 99.50th=[ 268], 99.90th=[ 300], 99.95th=[ 300], 00:16:55.423 | 99.99th=[ 300] 00:16:55.423 bw ( KiB/s): min= 1792, max=14848, per=0.85%, avg=9096.75, stdev=4101.82, samples=20 00:16:55.423 iops : min= 14, max= 116, avg=70.95, stdev=32.06, samples=20 00:16:55.423 lat (msec) : 10=8.25%, 20=33.58%, 50=4.93%, 100=37.33%, 250=15.46% 00:16:55.423 lat (msec) : 500=0.44% 00:16:55.423 cpu : usr=0.51%, sys=0.29%, ctx=2245, majf=0, minf=9 00:16:55.423 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.423 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.423 issued rwts: total=640,718,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.423 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.423 job73: (groupid=0, jobs=1): err= 0: pid=75114: Wed Jul 24 05:06:09 2024 00:16:55.424 read: IOPS=78, BW=9.79MiB/s (10.3MB/s)(80.0MiB/8171msec) 00:16:55.424 slat (usec): min=7, max=3146, avg=61.73, stdev=183.96 00:16:55.424 clat (usec): min=4114, max=71139, avg=16502.02, stdev=10349.55 00:16:55.424 lat (usec): min=4986, max=71162, avg=16563.75, stdev=10341.60 00:16:55.424 clat percentiles (usec): 00:16:55.424 | 1.00th=[ 5211], 5.00th=[ 6128], 10.00th=[ 6849], 20.00th=[ 9634], 00:16:55.424 | 30.00th=[11338], 40.00th=[12387], 50.00th=[13698], 60.00th=[15401], 00:16:55.424 | 70.00th=[17957], 80.00th=[20841], 90.00th=[26608], 95.00th=[40633], 00:16:55.424 | 99.00th=[62653], 99.50th=[67634], 99.90th=[70779], 99.95th=[70779], 00:16:55.424 | 99.99th=[70779] 00:16:55.424 write: IOPS=76, BW=9765KiB/s (9999kB/s)(83.4MiB/8743msec); 0 zone resets 00:16:55.424 slat (usec): min=45, max=3257, avg=164.35, stdev=279.66 00:16:55.424 clat 
(msec): min=43, max=368, avg=103.77, stdev=48.30 00:16:55.424 lat (msec): min=43, max=369, avg=103.93, stdev=48.31 00:16:55.424 clat percentiles (msec): 00:16:55.424 | 1.00th=[ 46], 5.00th=[ 62], 10.00th=[ 64], 20.00th=[ 67], 00:16:55.424 | 30.00th=[ 70], 40.00th=[ 75], 50.00th=[ 85], 60.00th=[ 96], 00:16:55.424 | 70.00th=[ 120], 80.00th=[ 142], 90.00th=[ 171], 95.00th=[ 205], 00:16:55.424 | 99.00th=[ 266], 99.50th=[ 284], 99.90th=[ 368], 99.95th=[ 368], 00:16:55.424 | 99.99th=[ 368] 00:16:55.424 bw ( KiB/s): min= 2560, max=14307, per=0.83%, avg=8873.58, stdev=4049.43, samples=19 00:16:55.424 iops : min= 20, max= 111, avg=69.00, stdev=31.62, samples=19 00:16:55.424 lat (msec) : 10=11.09%, 20=26.63%, 50=11.02%, 100=32.44%, 250=17.98% 00:16:55.424 lat (msec) : 500=0.84% 00:16:55.424 cpu : usr=0.52%, sys=0.27%, ctx=2106, majf=0, minf=5 00:16:55.424 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.424 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.424 issued rwts: total=640,667,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.424 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.424 job74: (groupid=0, jobs=1): err= 0: pid=75115: Wed Jul 24 05:06:09 2024 00:16:55.424 read: IOPS=79, BW=9.93MiB/s (10.4MB/s)(74.2MiB/7476msec) 00:16:55.424 slat (usec): min=7, max=567, avg=42.24, stdev=66.54 00:16:55.424 clat (msec): min=3, max=125, avg=16.06, stdev=15.38 00:16:55.424 lat (msec): min=3, max=125, avg=16.10, stdev=15.37 00:16:55.424 clat percentiles (msec): 00:16:55.424 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 9], 00:16:55.424 | 30.00th=[ 9], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 14], 00:16:55.424 | 70.00th=[ 17], 80.00th=[ 20], 90.00th=[ 26], 95.00th=[ 37], 00:16:55.424 | 99.00th=[ 108], 99.50th=[ 120], 99.90th=[ 126], 99.95th=[ 126], 00:16:55.424 | 99.99th=[ 126] 00:16:55.424 write: IOPS=72, 
BW=9306KiB/s (9529kB/s)(80.0MiB/8803msec); 0 zone resets 00:16:55.424 slat (usec): min=46, max=27076, avg=177.18, stdev=1084.39 00:16:55.424 clat (msec): min=56, max=305, avg=109.13, stdev=41.04 00:16:55.424 lat (msec): min=59, max=305, avg=109.31, stdev=41.16 00:16:55.424 clat percentiles (msec): 00:16:55.424 | 1.00th=[ 61], 5.00th=[ 63], 10.00th=[ 68], 20.00th=[ 74], 00:16:55.424 | 30.00th=[ 81], 40.00th=[ 91], 50.00th=[ 101], 60.00th=[ 111], 00:16:55.424 | 70.00th=[ 124], 80.00th=[ 140], 90.00th=[ 161], 95.00th=[ 184], 00:16:55.424 | 99.00th=[ 255], 99.50th=[ 275], 99.90th=[ 305], 99.95th=[ 305], 00:16:55.424 | 99.99th=[ 305] 00:16:55.424 bw ( KiB/s): min= 1024, max=14818, per=0.77%, avg=8197.89, stdev=3389.31, samples=19 00:16:55.424 iops : min= 8, max= 115, avg=63.95, stdev=26.40, samples=19 00:16:55.424 lat (msec) : 4=0.08%, 10=17.75%, 20=21.07%, 50=7.46%, 100=27.07% 00:16:55.424 lat (msec) : 250=25.93%, 500=0.65% 00:16:55.424 cpu : usr=0.59%, sys=0.13%, ctx=2054, majf=0, minf=7 00:16:55.424 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.424 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.424 issued rwts: total=594,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.424 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.424 job75: (groupid=0, jobs=1): err= 0: pid=75117: Wed Jul 24 05:06:09 2024 00:16:55.424 read: IOPS=73, BW=9411KiB/s (9637kB/s)(80.0MiB/8705msec) 00:16:55.424 slat (usec): min=7, max=1729, avg=44.18, stdev=97.30 00:16:55.424 clat (msec): min=5, max=241, avg=22.18, stdev=27.80 00:16:55.424 lat (msec): min=6, max=241, avg=22.22, stdev=27.80 00:16:55.424 clat percentiles (msec): 00:16:55.424 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 13], 00:16:55.424 | 30.00th=[ 14], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 18], 00:16:55.424 | 70.00th=[ 20], 80.00th=[ 22], 90.00th=[ 28], 
95.00th=[ 52], 00:16:55.424 | 99.00th=[ 232], 99.50th=[ 232], 99.90th=[ 241], 99.95th=[ 241], 00:16:55.424 | 99.99th=[ 241] 00:16:55.424 write: IOPS=88, BW=11.0MiB/s (11.6MB/s)(91.2MiB/8276msec); 0 zone resets 00:16:55.424 slat (usec): min=42, max=3072, avg=131.68, stdev=185.69 00:16:55.424 clat (msec): min=14, max=321, avg=89.90, stdev=39.77 00:16:55.424 lat (msec): min=14, max=321, avg=90.03, stdev=39.76 00:16:55.424 clat percentiles (msec): 00:16:55.424 | 1.00th=[ 15], 5.00th=[ 61], 10.00th=[ 62], 20.00th=[ 65], 00:16:55.424 | 30.00th=[ 69], 40.00th=[ 73], 50.00th=[ 77], 60.00th=[ 82], 00:16:55.424 | 70.00th=[ 91], 80.00th=[ 108], 90.00th=[ 148], 95.00th=[ 176], 00:16:55.424 | 99.00th=[ 243], 99.50th=[ 271], 99.90th=[ 321], 99.95th=[ 321], 00:16:55.424 | 99.99th=[ 321] 00:16:55.424 bw ( KiB/s): min= 1024, max=15104, per=0.86%, avg=9232.85, stdev=4747.03, samples=20 00:16:55.424 iops : min= 8, max= 118, avg=71.85, stdev=37.14, samples=20 00:16:55.424 lat (msec) : 10=3.72%, 20=30.29%, 50=11.46%, 100=41.31%, 250=12.77% 00:16:55.424 lat (msec) : 500=0.44% 00:16:55.424 cpu : usr=0.54%, sys=0.31%, ctx=2121, majf=0, minf=5 00:16:55.424 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.424 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.424 issued rwts: total=640,730,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.424 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.424 job76: (groupid=0, jobs=1): err= 0: pid=75118: Wed Jul 24 05:06:09 2024 00:16:55.424 read: IOPS=77, BW=9860KiB/s (10.1MB/s)(80.0MiB/8308msec) 00:16:55.424 slat (usec): min=7, max=1139, avg=43.54, stdev=91.79 00:16:55.424 clat (usec): min=8090, max=89711, avg=17241.21, stdev=9671.29 00:16:55.424 lat (usec): min=8263, max=89723, avg=17284.75, stdev=9666.19 00:16:55.424 clat percentiles (usec): 00:16:55.424 | 1.00th=[ 8848], 5.00th=[ 9503], 
10.00th=[10290], 20.00th=[11207], 00:16:55.424 | 30.00th=[11994], 40.00th=[12518], 50.00th=[13698], 60.00th=[16581], 00:16:55.424 | 70.00th=[19792], 80.00th=[22152], 90.00th=[26870], 95.00th=[30278], 00:16:55.424 | 99.00th=[72877], 99.50th=[81265], 99.90th=[89654], 99.95th=[89654], 00:16:55.424 | 99.99th=[89654] 00:16:55.424 write: IOPS=86, BW=10.8MiB/s (11.3MB/s)(93.0MiB/8650msec); 0 zone resets 00:16:55.424 slat (usec): min=45, max=7324, avg=150.84, stdev=365.32 00:16:55.424 clat (msec): min=39, max=377, avg=92.01, stdev=41.26 00:16:55.424 lat (msec): min=39, max=378, avg=92.16, stdev=41.24 00:16:55.424 clat percentiles (msec): 00:16:55.424 | 1.00th=[ 55], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 66], 00:16:55.424 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 86], 00:16:55.424 | 70.00th=[ 94], 80.00th=[ 110], 90.00th=[ 138], 95.00th=[ 176], 00:16:55.424 | 99.00th=[ 257], 99.50th=[ 279], 99.90th=[ 380], 99.95th=[ 380], 00:16:55.424 | 99.99th=[ 380] 00:16:55.424 bw ( KiB/s): min= 1792, max=15104, per=0.88%, avg=9431.45, stdev=4642.35, samples=20 00:16:55.424 iops : min= 14, max= 118, avg=73.55, stdev=36.35, samples=20 00:16:55.424 lat (msec) : 10=3.83%, 20=29.19%, 50=12.93%, 100=40.25%, 250=13.15% 00:16:55.424 lat (msec) : 500=0.65% 00:16:55.424 cpu : usr=0.54%, sys=0.27%, ctx=2131, majf=0, minf=1 00:16:55.424 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.424 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.424 issued rwts: total=640,744,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.424 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.424 job77: (groupid=0, jobs=1): err= 0: pid=75119: Wed Jul 24 05:06:09 2024 00:16:55.424 read: IOPS=60, BW=7724KiB/s (7910kB/s)(60.0MiB/7954msec) 00:16:55.424 slat (usec): min=8, max=1177, avg=51.11, stdev=105.19 00:16:55.424 clat (msec): min=4, max=149, 
avg=21.17, stdev=20.31 00:16:55.424 lat (msec): min=4, max=149, avg=21.22, stdev=20.30 00:16:55.424 clat percentiles (msec): 00:16:55.424 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 11], 00:16:55.424 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 19], 00:16:55.424 | 70.00th=[ 21], 80.00th=[ 24], 90.00th=[ 36], 95.00th=[ 64], 00:16:55.424 | 99.00th=[ 114], 99.50th=[ 132], 99.90th=[ 150], 99.95th=[ 150], 00:16:55.424 | 99.99th=[ 150] 00:16:55.424 write: IOPS=69, BW=8890KiB/s (9104kB/s)(76.4MiB/8797msec); 0 zone resets 00:16:55.424 slat (usec): min=41, max=6720, avg=130.75, stdev=297.89 00:16:55.424 clat (msec): min=40, max=369, avg=114.43, stdev=48.34 00:16:55.424 lat (msec): min=40, max=369, avg=114.56, stdev=48.35 00:16:55.424 clat percentiles (msec): 00:16:55.424 | 1.00th=[ 44], 5.00th=[ 63], 10.00th=[ 67], 20.00th=[ 74], 00:16:55.424 | 30.00th=[ 83], 40.00th=[ 91], 50.00th=[ 107], 60.00th=[ 121], 00:16:55.424 | 70.00th=[ 131], 80.00th=[ 146], 90.00th=[ 169], 95.00th=[ 203], 00:16:55.424 | 99.00th=[ 275], 99.50th=[ 334], 99.90th=[ 368], 99.95th=[ 368], 00:16:55.424 | 99.99th=[ 368] 00:16:55.424 bw ( KiB/s): min= 1792, max=13128, per=0.72%, avg=7707.10, stdev=3437.40, samples=20 00:16:55.424 iops : min= 14, max= 102, avg=59.90, stdev=26.93, samples=20 00:16:55.424 lat (msec) : 10=7.88%, 20=22.55%, 50=11.92%, 100=26.58%, 250=29.88% 00:16:55.424 lat (msec) : 500=1.19% 00:16:55.424 cpu : usr=0.49%, sys=0.17%, ctx=1787, majf=0, minf=9 00:16:55.424 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.424 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.424 issued rwts: total=480,611,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.425 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.425 job78: (groupid=0, jobs=1): err= 0: pid=75120: Wed Jul 24 05:06:09 2024 00:16:55.425 read: IOPS=75, 
BW=9662KiB/s (9893kB/s)(80.0MiB/8479msec) 00:16:55.425 slat (usec): min=7, max=1101, avg=47.27, stdev=106.36 00:16:55.425 clat (msec): min=4, max=108, avg=19.81, stdev=13.56 00:16:55.425 lat (msec): min=4, max=108, avg=19.86, stdev=13.55 00:16:55.425 clat percentiles (msec): 00:16:55.425 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 12], 00:16:55.425 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 17], 60.00th=[ 19], 00:16:55.425 | 70.00th=[ 22], 80.00th=[ 26], 90.00th=[ 33], 95.00th=[ 43], 00:16:55.425 | 99.00th=[ 85], 99.50th=[ 96], 99.90th=[ 109], 99.95th=[ 109], 00:16:55.425 | 99.99th=[ 109] 00:16:55.425 write: IOPS=81, BW=10.2MiB/s (10.7MB/s)(86.4MiB/8441msec); 0 zone resets 00:16:55.425 slat (usec): min=38, max=8284, avg=137.04, stdev=344.46 00:16:55.425 clat (msec): min=42, max=305, avg=96.64, stdev=40.72 00:16:55.425 lat (msec): min=42, max=306, avg=96.77, stdev=40.72 00:16:55.425 clat percentiles (msec): 00:16:55.425 | 1.00th=[ 59], 5.00th=[ 61], 10.00th=[ 62], 20.00th=[ 66], 00:16:55.425 | 30.00th=[ 70], 40.00th=[ 75], 50.00th=[ 82], 60.00th=[ 90], 00:16:55.425 | 70.00th=[ 108], 80.00th=[ 128], 90.00th=[ 150], 95.00th=[ 180], 00:16:55.425 | 99.00th=[ 247], 99.50th=[ 296], 99.90th=[ 305], 99.95th=[ 305], 00:16:55.425 | 99.99th=[ 305] 00:16:55.425 bw ( KiB/s): min= 1792, max=14848, per=0.82%, avg=8755.10, stdev=4436.35, samples=20 00:16:55.425 iops : min= 14, max= 116, avg=68.30, stdev=34.68, samples=20 00:16:55.425 lat (msec) : 10=7.59%, 20=23.89%, 50=15.10%, 100=35.91%, 250=17.05% 00:16:55.425 lat (msec) : 500=0.45% 00:16:55.425 cpu : usr=0.58%, sys=0.20%, ctx=2167, majf=0, minf=7 00:16:55.425 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.425 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.425 issued rwts: total=640,691,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.425 latency : target=0, window=0, 
percentile=100.00%, depth=8 00:16:55.425 job79: (groupid=0, jobs=1): err= 0: pid=75121: Wed Jul 24 05:06:09 2024 00:16:55.425 read: IOPS=77, BW=9951KiB/s (10.2MB/s)(80.0MiB/8232msec) 00:16:55.425 slat (usec): min=7, max=764, avg=42.61, stdev=70.72 00:16:55.425 clat (msec): min=6, max=114, avg=18.08, stdev=12.70 00:16:55.425 lat (msec): min=6, max=114, avg=18.13, stdev=12.71 00:16:55.425 clat percentiles (msec): 00:16:55.425 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 11], 00:16:55.425 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 18], 00:16:55.425 | 70.00th=[ 21], 80.00th=[ 23], 90.00th=[ 28], 95.00th=[ 37], 00:16:55.425 | 99.00th=[ 86], 99.50th=[ 108], 99.90th=[ 115], 99.95th=[ 115], 00:16:55.425 | 99.99th=[ 115] 00:16:55.425 write: IOPS=81, BW=10.2MiB/s (10.7MB/s)(87.9MiB/8604msec); 0 zone resets 00:16:55.425 slat (usec): min=38, max=8214, avg=138.82, stdev=338.67 00:16:55.425 clat (msec): min=12, max=355, avg=96.77, stdev=41.67 00:16:55.425 lat (msec): min=12, max=356, avg=96.91, stdev=41.67 00:16:55.425 clat percentiles (msec): 00:16:55.425 | 1.00th=[ 22], 5.00th=[ 61], 10.00th=[ 62], 20.00th=[ 66], 00:16:55.425 | 30.00th=[ 69], 40.00th=[ 74], 50.00th=[ 82], 60.00th=[ 94], 00:16:55.425 | 70.00th=[ 115], 80.00th=[ 129], 90.00th=[ 148], 95.00th=[ 171], 00:16:55.425 | 99.00th=[ 241], 99.50th=[ 253], 99.90th=[ 355], 99.95th=[ 355], 00:16:55.425 | 99.99th=[ 355] 00:16:55.425 bw ( KiB/s): min= 2039, max=15616, per=0.83%, avg=8888.80, stdev=4558.42, samples=20 00:16:55.425 iops : min= 15, max= 122, avg=69.10, stdev=35.66, samples=20 00:16:55.425 lat (msec) : 10=9.46%, 20=23.98%, 50=14.37%, 100=32.84%, 250=18.91% 00:16:55.425 lat (msec) : 500=0.45% 00:16:55.425 cpu : usr=0.53%, sys=0.27%, ctx=2180, majf=0, minf=3 00:16:55.425 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.425 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:16:55.425 issued rwts: total=640,703,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.425 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.425 job80: (groupid=0, jobs=1): err= 0: pid=75122: Wed Jul 24 05:06:09 2024 00:16:55.425 read: IOPS=81, BW=10.2MiB/s (10.7MB/s)(80.0MiB/7845msec) 00:16:55.425 slat (usec): min=7, max=969, avg=37.86, stdev=74.21 00:16:55.425 clat (msec): min=3, max=179, avg=12.18, stdev=17.86 00:16:55.425 lat (msec): min=3, max=179, avg=12.22, stdev=17.86 00:16:55.425 clat percentiles (msec): 00:16:55.425 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7], 00:16:55.425 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 11], 00:16:55.425 | 70.00th=[ 12], 80.00th=[ 13], 90.00th=[ 17], 95.00th=[ 24], 00:16:55.425 | 99.00th=[ 155], 99.50th=[ 169], 99.90th=[ 180], 99.95th=[ 180], 00:16:55.425 | 99.99th=[ 180] 00:16:55.425 write: IOPS=74, BW=9506KiB/s (9734kB/s)(84.0MiB/9049msec); 0 zone resets 00:16:55.425 slat (usec): min=38, max=1001, avg=121.73, stdev=134.47 00:16:55.425 clat (msec): min=40, max=274, avg=107.04, stdev=37.57 00:16:55.425 lat (msec): min=40, max=274, avg=107.16, stdev=37.58 00:16:55.425 clat percentiles (msec): 00:16:55.425 | 1.00th=[ 58], 5.00th=[ 65], 10.00th=[ 67], 20.00th=[ 74], 00:16:55.425 | 30.00th=[ 82], 40.00th=[ 89], 50.00th=[ 99], 60.00th=[ 108], 00:16:55.425 | 70.00th=[ 122], 80.00th=[ 138], 90.00th=[ 163], 95.00th=[ 180], 00:16:55.425 | 99.00th=[ 215], 99.50th=[ 236], 99.90th=[ 275], 99.95th=[ 275], 00:16:55.425 | 99.99th=[ 275] 00:16:55.425 bw ( KiB/s): min= 2810, max=13824, per=0.81%, avg=8621.53, stdev=3019.86, samples=19 00:16:55.425 iops : min= 21, max= 108, avg=67.21, stdev=23.79, samples=19 00:16:55.425 lat (msec) : 4=0.08%, 10=29.12%, 20=16.08%, 50=2.90%, 100=26.45% 00:16:55.425 lat (msec) : 250=25.30%, 500=0.08% 00:16:55.425 cpu : usr=0.55%, sys=0.22%, ctx=2031, majf=0, minf=1 00:16:55.425 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:16:55.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.425 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.425 issued rwts: total=640,672,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.425 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.425 job81: (groupid=0, jobs=1): err= 0: pid=75128: Wed Jul 24 05:06:09 2024 00:16:55.425 read: IOPS=79, BW=9.92MiB/s (10.4MB/s)(80.0MiB/8065msec) 00:16:55.425 slat (usec): min=8, max=1587, avg=50.48, stdev=106.35 00:16:55.425 clat (msec): min=4, max=107, avg=13.14, stdev=11.19 00:16:55.425 lat (msec): min=4, max=107, avg=13.19, stdev=11.19 00:16:55.425 clat percentiles (msec): 00:16:55.425 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 8], 00:16:55.425 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 13], 00:16:55.425 | 70.00th=[ 15], 80.00th=[ 17], 90.00th=[ 22], 95.00th=[ 26], 00:16:55.425 | 99.00th=[ 91], 99.50th=[ 103], 99.90th=[ 108], 99.95th=[ 108], 00:16:55.425 | 99.99th=[ 108] 00:16:55.425 write: IOPS=74, BW=9523KiB/s (9752kB/s)(83.6MiB/8992msec); 0 zone resets 00:16:55.425 slat (usec): min=48, max=15338, avg=151.29, stdev=614.84 00:16:55.425 clat (msec): min=31, max=316, avg=106.64, stdev=39.33 00:16:55.425 lat (msec): min=32, max=316, avg=106.79, stdev=39.30 00:16:55.425 clat percentiles (msec): 00:16:55.425 | 1.00th=[ 37], 5.00th=[ 63], 10.00th=[ 68], 20.00th=[ 79], 00:16:55.425 | 30.00th=[ 84], 40.00th=[ 90], 50.00th=[ 99], 60.00th=[ 109], 00:16:55.425 | 70.00th=[ 118], 80.00th=[ 125], 90.00th=[ 157], 95.00th=[ 186], 00:16:55.425 | 99.00th=[ 239], 99.50th=[ 309], 99.90th=[ 317], 99.95th=[ 317], 00:16:55.425 | 99.99th=[ 317] 00:16:55.425 bw ( KiB/s): min= 2816, max=14592, per=0.79%, avg=8455.90, stdev=2998.67, samples=20 00:16:55.425 iops : min= 22, max= 114, avg=66.00, stdev=23.38, samples=20 00:16:55.425 lat (msec) : 10=22.46%, 20=20.93%, 50=5.58%, 100=26.13%, 250=24.52% 00:16:55.425 lat (msec) : 500=0.38% 00:16:55.425 
cpu : usr=0.52%, sys=0.26%, ctx=2174, majf=0, minf=3 00:16:55.425 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.425 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.425 issued rwts: total=640,669,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.425 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.425 job82: (groupid=0, jobs=1): err= 0: pid=75130: Wed Jul 24 05:06:09 2024 00:16:55.425 read: IOPS=78, BW=9.87MiB/s (10.4MB/s)(80.0MiB/8103msec) 00:16:55.425 slat (usec): min=8, max=1632, avg=46.12, stdev=98.79 00:16:55.425 clat (usec): min=5637, max=37386, avg=13674.52, stdev=5833.74 00:16:55.425 lat (usec): min=6823, max=37448, avg=13720.64, stdev=5834.83 00:16:55.425 clat percentiles (usec): 00:16:55.425 | 1.00th=[ 7046], 5.00th=[ 7373], 10.00th=[ 7898], 20.00th=[ 8455], 00:16:55.425 | 30.00th=[ 9503], 40.00th=[11076], 50.00th=[12911], 60.00th=[13960], 00:16:55.425 | 70.00th=[15270], 80.00th=[17433], 90.00th=[21890], 95.00th=[25560], 00:16:55.425 | 99.00th=[32900], 99.50th=[34341], 99.90th=[37487], 99.95th=[37487], 00:16:55.425 | 99.99th=[37487] 00:16:55.425 write: IOPS=81, BW=10.2MiB/s (10.7MB/s)(91.6MiB/8959msec); 0 zone resets 00:16:55.425 slat (usec): min=45, max=27235, avg=174.43, stdev=1038.58 00:16:55.425 clat (msec): min=48, max=341, avg=96.58, stdev=43.04 00:16:55.425 lat (msec): min=48, max=341, avg=96.75, stdev=43.01 00:16:55.425 clat percentiles (msec): 00:16:55.425 | 1.00th=[ 55], 5.00th=[ 62], 10.00th=[ 62], 20.00th=[ 67], 00:16:55.425 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 83], 60.00th=[ 88], 00:16:55.425 | 70.00th=[ 102], 80.00th=[ 117], 90.00th=[ 146], 95.00th=[ 190], 00:16:55.425 | 99.00th=[ 266], 99.50th=[ 313], 99.90th=[ 342], 99.95th=[ 342], 00:16:55.425 | 99.99th=[ 342] 00:16:55.425 bw ( KiB/s): min= 2560, max=14621, per=0.87%, avg=9288.60, stdev=3730.89, samples=20 00:16:55.425 
iops : min= 20, max= 114, avg=72.50, stdev=29.18, samples=20 00:16:55.425 lat (msec) : 10=15.29%, 20=25.35%, 50=6.19%, 100=37.00%, 250=15.51% 00:16:55.425 lat (msec) : 500=0.66% 00:16:55.425 cpu : usr=0.55%, sys=0.25%, ctx=2167, majf=0, minf=3 00:16:55.425 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.426 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.426 issued rwts: total=640,733,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.426 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.426 job83: (groupid=0, jobs=1): err= 0: pid=75131: Wed Jul 24 05:06:09 2024 00:16:55.426 read: IOPS=79, BW=9.91MiB/s (10.4MB/s)(80.0MiB/8070msec) 00:16:55.426 slat (usec): min=7, max=1046, avg=46.39, stdev=95.80 00:16:55.426 clat (msec): min=6, max=102, avg=13.82, stdev=10.08 00:16:55.426 lat (msec): min=6, max=102, avg=13.86, stdev=10.08 00:16:55.426 clat percentiles (msec): 00:16:55.426 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:16:55.426 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 14], 00:16:55.426 | 70.00th=[ 15], 80.00th=[ 16], 90.00th=[ 19], 95.00th=[ 23], 00:16:55.426 | 99.00th=[ 92], 99.50th=[ 100], 99.90th=[ 103], 99.95th=[ 103], 00:16:55.426 | 99.99th=[ 103] 00:16:55.426 write: IOPS=78, BW=9994KiB/s (10.2MB/s)(87.1MiB/8927msec); 0 zone resets 00:16:55.426 slat (usec): min=43, max=27793, avg=167.45, stdev=1058.75 00:16:55.426 clat (msec): min=55, max=365, avg=101.30, stdev=44.46 00:16:55.426 lat (msec): min=56, max=365, avg=101.47, stdev=44.45 00:16:55.426 clat percentiles (msec): 00:16:55.426 | 1.00th=[ 58], 5.00th=[ 62], 10.00th=[ 65], 20.00th=[ 70], 00:16:55.426 | 30.00th=[ 73], 40.00th=[ 78], 50.00th=[ 86], 60.00th=[ 95], 00:16:55.426 | 70.00th=[ 111], 80.00th=[ 129], 90.00th=[ 165], 95.00th=[ 186], 00:16:55.426 | 99.00th=[ 259], 99.50th=[ 321], 99.90th=[ 368], 99.95th=[ 368], 
00:16:55.426 | 99.99th=[ 368] 00:16:55.426 bw ( KiB/s): min= 2304, max=14592, per=0.83%, avg=8831.00, stdev=3981.85, samples=20 00:16:55.426 iops : min= 18, max= 114, avg=68.90, stdev=31.17, samples=20 00:16:55.426 lat (msec) : 10=13.69%, 20=30.22%, 50=3.37%, 100=33.81%, 250=18.32% 00:16:55.426 lat (msec) : 500=0.60% 00:16:55.426 cpu : usr=0.57%, sys=0.22%, ctx=2072, majf=0, minf=7 00:16:55.426 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.426 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.426 issued rwts: total=640,697,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.426 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.426 job84: (groupid=0, jobs=1): err= 0: pid=75132: Wed Jul 24 05:06:09 2024 00:16:55.426 read: IOPS=73, BW=9417KiB/s (9643kB/s)(80.0MiB/8699msec) 00:16:55.426 slat (usec): min=8, max=1084, avg=52.82, stdev=97.90 00:16:55.426 clat (msec): min=6, max=469, avg=24.66, stdev=50.17 00:16:55.426 lat (msec): min=6, max=469, avg=24.72, stdev=50.17 00:16:55.426 clat percentiles (msec): 00:16:55.426 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 11], 00:16:55.426 | 30.00th=[ 13], 40.00th=[ 15], 50.00th=[ 16], 60.00th=[ 18], 00:16:55.426 | 70.00th=[ 20], 80.00th=[ 25], 90.00th=[ 32], 95.00th=[ 40], 00:16:55.426 | 99.00th=[ 443], 99.50th=[ 460], 99.90th=[ 468], 99.95th=[ 468], 00:16:55.426 | 99.99th=[ 468] 00:16:55.426 write: IOPS=94, BW=11.8MiB/s (12.4MB/s)(95.9MiB/8122msec); 0 zone resets 00:16:55.426 slat (usec): min=44, max=17079, avg=172.68, stdev=775.39 00:16:55.426 clat (msec): min=5, max=361, avg=83.92, stdev=36.09 00:16:55.426 lat (msec): min=5, max=361, avg=84.09, stdev=36.01 00:16:55.426 clat percentiles (msec): 00:16:55.426 | 1.00th=[ 11], 5.00th=[ 58], 10.00th=[ 62], 20.00th=[ 65], 00:16:55.426 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 84], 00:16:55.426 | 70.00th=[ 
90], 80.00th=[ 100], 90.00th=[ 118], 95.00th=[ 136], 00:16:55.426 | 99.00th=[ 222], 99.50th=[ 342], 99.90th=[ 363], 99.95th=[ 363], 00:16:55.426 | 99.99th=[ 363] 00:16:55.426 bw ( KiB/s): min= 1792, max=16384, per=1.01%, avg=10747.61, stdev=4039.52, samples=18 00:16:55.426 iops : min= 14, max= 128, avg=83.61, stdev=31.48, samples=18 00:16:55.426 lat (msec) : 10=8.24%, 20=26.37%, 50=11.94%, 100=42.08%, 250=10.45% 00:16:55.426 lat (msec) : 500=0.92% 00:16:55.426 cpu : usr=0.59%, sys=0.25%, ctx=2237, majf=0, minf=5 00:16:55.426 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.426 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.426 issued rwts: total=640,767,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.426 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.426 job85: (groupid=0, jobs=1): err= 0: pid=75133: Wed Jul 24 05:06:09 2024 00:16:55.426 read: IOPS=78, BW=9.78MiB/s (10.3MB/s)(80.0MiB/8177msec) 00:16:55.426 slat (usec): min=7, max=970, avg=54.65, stdev=110.17 00:16:55.426 clat (usec): min=5133, max=39107, avg=13248.51, stdev=5414.10 00:16:55.426 lat (usec): min=5254, max=39125, avg=13303.15, stdev=5404.30 00:16:55.426 clat percentiles (usec): 00:16:55.426 | 1.00th=[ 5735], 5.00th=[ 6718], 10.00th=[ 7635], 20.00th=[ 8848], 00:16:55.426 | 30.00th=[ 9372], 40.00th=[11338], 50.00th=[12256], 60.00th=[13435], 00:16:55.426 | 70.00th=[14746], 80.00th=[16581], 90.00th=[20841], 95.00th=[24773], 00:16:55.426 | 99.00th=[28967], 99.50th=[30540], 99.90th=[39060], 99.95th=[39060], 00:16:55.426 | 99.99th=[39060] 00:16:55.426 write: IOPS=80, BW=10.0MiB/s (10.5MB/s)(90.4MiB/9007msec); 0 zone resets 00:16:55.426 slat (usec): min=39, max=30963, avg=182.42, stdev=1163.80 00:16:55.426 clat (msec): min=9, max=399, avg=98.71, stdev=44.92 00:16:55.426 lat (msec): min=9, max=399, avg=98.90, stdev=44.90 00:16:55.426 clat 
percentiles (msec): 00:16:55.426 | 1.00th=[ 42], 5.00th=[ 63], 10.00th=[ 65], 20.00th=[ 69], 00:16:55.426 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 83], 60.00th=[ 90], 00:16:55.426 | 70.00th=[ 106], 80.00th=[ 127], 90.00th=[ 153], 95.00th=[ 184], 00:16:55.426 | 99.00th=[ 271], 99.50th=[ 305], 99.90th=[ 401], 99.95th=[ 401], 00:16:55.426 | 99.99th=[ 401] 00:16:55.426 bw ( KiB/s): min= 3832, max=14080, per=0.86%, avg=9141.90, stdev=3544.50, samples=20 00:16:55.426 iops : min= 29, max= 110, avg=71.25, stdev=27.79, samples=20 00:16:55.426 lat (msec) : 10=15.99%, 20=25.75%, 50=5.94%, 100=34.63%, 250=17.02% 00:16:55.426 lat (msec) : 500=0.66% 00:16:55.426 cpu : usr=0.58%, sys=0.24%, ctx=2232, majf=0, minf=3 00:16:55.426 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.426 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.426 issued rwts: total=640,723,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.426 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.426 job86: (groupid=0, jobs=1): err= 0: pid=75134: Wed Jul 24 05:06:09 2024 00:16:55.426 read: IOPS=75, BW=9650KiB/s (9882kB/s)(80.0MiB/8489msec) 00:16:55.426 slat (usec): min=7, max=897, avg=40.13, stdev=76.10 00:16:55.426 clat (msec): min=7, max=140, avg=16.71, stdev=14.59 00:16:55.426 lat (msec): min=7, max=140, avg=16.75, stdev=14.59 00:16:55.426 clat percentiles (msec): 00:16:55.426 | 1.00th=[ 8], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10], 00:16:55.426 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 15], 00:16:55.426 | 70.00th=[ 17], 80.00th=[ 20], 90.00th=[ 26], 95.00th=[ 37], 00:16:55.426 | 99.00th=[ 111], 99.50th=[ 120], 99.90th=[ 142], 99.95th=[ 142], 00:16:55.426 | 99.99th=[ 142] 00:16:55.426 write: IOPS=85, BW=10.7MiB/s (11.3MB/s)(93.5MiB/8713msec); 0 zone resets 00:16:55.426 slat (usec): min=27, max=9196, avg=152.13, stdev=421.14 00:16:55.426 
clat (msec): min=7, max=259, avg=92.35, stdev=36.02 00:16:55.426 lat (msec): min=7, max=259, avg=92.50, stdev=36.02 00:16:55.426 clat percentiles (msec): 00:16:55.426 | 1.00th=[ 14], 5.00th=[ 61], 10.00th=[ 65], 20.00th=[ 70], 00:16:55.426 | 30.00th=[ 74], 40.00th=[ 80], 50.00th=[ 83], 60.00th=[ 88], 00:16:55.426 | 70.00th=[ 95], 80.00th=[ 111], 90.00th=[ 133], 95.00th=[ 165], 00:16:55.426 | 99.00th=[ 230], 99.50th=[ 239], 99.90th=[ 259], 99.95th=[ 259], 00:16:55.426 | 99.99th=[ 259] 00:16:55.426 bw ( KiB/s): min= 1792, max=16128, per=0.89%, avg=9469.10, stdev=4131.22, samples=20 00:16:55.426 iops : min= 14, max= 126, avg=73.85, stdev=32.29, samples=20 00:16:55.426 lat (msec) : 10=9.94%, 20=29.18%, 50=6.99%, 100=39.05%, 250=14.77% 00:16:55.426 lat (msec) : 500=0.07% 00:16:55.426 cpu : usr=0.56%, sys=0.27%, ctx=2172, majf=0, minf=1 00:16:55.426 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.426 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.426 issued rwts: total=640,748,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.426 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.426 job87: (groupid=0, jobs=1): err= 0: pid=75135: Wed Jul 24 05:06:09 2024 00:16:55.426 read: IOPS=75, BW=9722KiB/s (9956kB/s)(80.0MiB/8426msec) 00:16:55.426 slat (usec): min=7, max=938, avg=50.80, stdev=98.86 00:16:55.426 clat (msec): min=6, max=206, avg=18.73, stdev=22.07 00:16:55.426 lat (msec): min=6, max=206, avg=18.78, stdev=22.09 00:16:55.426 clat percentiles (msec): 00:16:55.426 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:16:55.426 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 14], 60.00th=[ 15], 00:16:55.426 | 70.00th=[ 17], 80.00th=[ 22], 90.00th=[ 33], 95.00th=[ 46], 00:16:55.426 | 99.00th=[ 142], 99.50th=[ 197], 99.90th=[ 207], 99.95th=[ 207], 00:16:55.426 | 99.99th=[ 207] 00:16:55.426 write: IOPS=88, 
BW=11.1MiB/s (11.6MB/s)(94.9MiB/8545msec); 0 zone resets 00:16:55.426 slat (usec): min=42, max=9157, avg=132.18, stdev=371.81 00:16:55.426 clat (msec): min=13, max=296, avg=89.12, stdev=32.12 00:16:55.426 lat (msec): min=13, max=296, avg=89.26, stdev=32.11 00:16:55.426 clat percentiles (msec): 00:16:55.426 | 1.00th=[ 41], 5.00th=[ 63], 10.00th=[ 65], 20.00th=[ 70], 00:16:55.426 | 30.00th=[ 73], 40.00th=[ 79], 50.00th=[ 83], 60.00th=[ 87], 00:16:55.426 | 70.00th=[ 94], 80.00th=[ 102], 90.00th=[ 116], 95.00th=[ 140], 00:16:55.426 | 99.00th=[ 255], 99.50th=[ 279], 99.90th=[ 296], 99.95th=[ 296], 00:16:55.426 | 99.99th=[ 296] 00:16:55.426 bw ( KiB/s): min= 1532, max=15616, per=0.90%, avg=9611.35, stdev=4204.91, samples=20 00:16:55.426 iops : min= 11, max= 122, avg=74.95, stdev=32.92, samples=20 00:16:55.426 lat (msec) : 10=13.51%, 20=23.16%, 50=8.36%, 100=42.96%, 250=11.44% 00:16:55.426 lat (msec) : 500=0.57% 00:16:55.426 cpu : usr=0.60%, sys=0.22%, ctx=2187, majf=0, minf=3 00:16:55.427 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.427 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.427 issued rwts: total=640,759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.427 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.427 job88: (groupid=0, jobs=1): err= 0: pid=75136: Wed Jul 24 05:06:09 2024 00:16:55.427 read: IOPS=78, BW=9.85MiB/s (10.3MB/s)(80.0MiB/8122msec) 00:16:55.427 slat (usec): min=7, max=1137, avg=41.25, stdev=76.92 00:16:55.427 clat (usec): min=6884, max=47233, avg=12359.09, stdev=4508.84 00:16:55.427 lat (usec): min=6985, max=47248, avg=12400.34, stdev=4515.29 00:16:55.427 clat percentiles (usec): 00:16:55.427 | 1.00th=[ 7242], 5.00th=[ 7767], 10.00th=[ 8029], 20.00th=[ 8586], 00:16:55.427 | 30.00th=[ 9241], 40.00th=[10552], 50.00th=[11469], 60.00th=[12649], 00:16:55.427 | 70.00th=[13698], 
80.00th=[14877], 90.00th=[17171], 95.00th=[19792], 00:16:55.427 | 99.00th=[29492], 99.50th=[36439], 99.90th=[47449], 99.95th=[47449], 00:16:55.427 | 99.99th=[47449] 00:16:55.427 write: IOPS=83, BW=10.4MiB/s (10.9MB/s)(94.0MiB/9049msec); 0 zone resets 00:16:55.427 slat (usec): min=42, max=6655, avg=145.16, stdev=369.12 00:16:55.427 clat (msec): min=40, max=333, avg=95.40, stdev=35.01 00:16:55.427 lat (msec): min=40, max=333, avg=95.55, stdev=35.00 00:16:55.427 clat percentiles (msec): 00:16:55.427 | 1.00th=[ 47], 5.00th=[ 62], 10.00th=[ 66], 20.00th=[ 70], 00:16:55.427 | 30.00th=[ 77], 40.00th=[ 81], 50.00th=[ 86], 60.00th=[ 93], 00:16:55.427 | 70.00th=[ 100], 80.00th=[ 115], 90.00th=[ 136], 95.00th=[ 178], 00:16:55.427 | 99.00th=[ 218], 99.50th=[ 236], 99.90th=[ 334], 99.95th=[ 334], 00:16:55.427 | 99.99th=[ 334] 00:16:55.427 bw ( KiB/s): min= 4087, max=13824, per=0.89%, avg=9536.00, stdev=3544.48, samples=20 00:16:55.427 iops : min= 31, max= 108, avg=74.40, stdev=27.72, samples=20 00:16:55.427 lat (msec) : 10=16.09%, 20=27.87%, 50=2.59%, 100=37.50%, 250=15.73% 00:16:55.427 lat (msec) : 500=0.22% 00:16:55.427 cpu : usr=0.58%, sys=0.24%, ctx=2185, majf=0, minf=1 00:16:55.427 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.427 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.427 issued rwts: total=640,752,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.427 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.427 job89: (groupid=0, jobs=1): err= 0: pid=75137: Wed Jul 24 05:06:09 2024 00:16:55.427 read: IOPS=60, BW=7803KiB/s (7990kB/s)(61.0MiB/8005msec) 00:16:55.427 slat (usec): min=7, max=802, avg=50.92, stdev=85.70 00:16:55.427 clat (usec): min=5047, max=55813, avg=13817.01, stdev=8867.97 00:16:55.427 lat (usec): min=5063, max=55821, avg=13867.93, stdev=8866.60 00:16:55.427 clat percentiles (usec): 
00:16:55.427 | 1.00th=[ 5080], 5.00th=[ 6063], 10.00th=[ 6456], 20.00th=[ 7767], 00:16:55.427 | 30.00th=[ 8979], 40.00th=[10290], 50.00th=[12256], 60.00th=[12911], 00:16:55.427 | 70.00th=[14091], 80.00th=[16581], 90.00th=[21365], 95.00th=[32375], 00:16:55.427 | 99.00th=[55837], 99.50th=[55837], 99.90th=[55837], 99.95th=[55837], 00:16:55.427 | 99.99th=[55837] 00:16:55.427 write: IOPS=69, BW=8943KiB/s (9158kB/s)(80.0MiB/9160msec); 0 zone resets 00:16:55.427 slat (usec): min=41, max=2245, avg=126.39, stdev=167.01 00:16:55.427 clat (msec): min=24, max=476, avg=113.86, stdev=58.39 00:16:55.427 lat (msec): min=24, max=476, avg=113.99, stdev=58.39 00:16:55.427 clat percentiles (msec): 00:16:55.427 | 1.00th=[ 31], 5.00th=[ 62], 10.00th=[ 67], 20.00th=[ 73], 00:16:55.427 | 30.00th=[ 82], 40.00th=[ 90], 50.00th=[ 101], 60.00th=[ 114], 00:16:55.427 | 70.00th=[ 125], 80.00th=[ 136], 90.00th=[ 174], 95.00th=[ 209], 00:16:55.427 | 99.00th=[ 397], 99.50th=[ 422], 99.90th=[ 477], 99.95th=[ 477], 00:16:55.427 | 99.99th=[ 477] 00:16:55.427 bw ( KiB/s): min= 2304, max=14592, per=0.77%, avg=8188.10, stdev=3364.74, samples=20 00:16:55.427 iops : min= 18, max= 114, avg=63.80, stdev=26.23, samples=20 00:16:55.427 lat (msec) : 10=15.96%, 20=21.72%, 50=5.67%, 100=28.10%, 250=26.95% 00:16:55.427 lat (msec) : 500=1.60% 00:16:55.427 cpu : usr=0.47%, sys=0.22%, ctx=1862, majf=0, minf=1 00:16:55.427 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.427 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.427 issued rwts: total=488,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.427 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.427 job90: (groupid=0, jobs=1): err= 0: pid=75138: Wed Jul 24 05:06:09 2024 00:16:55.427 read: IOPS=76, BW=9801KiB/s (10.0MB/s)(80.0MiB/8358msec) 00:16:55.427 slat (usec): min=7, max=1246, avg=51.14, 
stdev=120.59 00:16:55.427 clat (msec): min=5, max=155, avg=18.39, stdev=15.20 00:16:55.427 lat (msec): min=5, max=155, avg=18.44, stdev=15.21 00:16:55.427 clat percentiles (msec): 00:16:55.427 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 10], 00:16:55.427 | 30.00th=[ 12], 40.00th=[ 15], 50.00th=[ 17], 60.00th=[ 19], 00:16:55.427 | 70.00th=[ 21], 80.00th=[ 24], 90.00th=[ 28], 95.00th=[ 35], 00:16:55.427 | 99.00th=[ 112], 99.50th=[ 130], 99.90th=[ 157], 99.95th=[ 157], 00:16:55.427 | 99.99th=[ 157] 00:16:55.427 write: IOPS=79, BW=9.90MiB/s (10.4MB/s)(84.2MiB/8512msec); 0 zone resets 00:16:55.427 slat (usec): min=42, max=2549, avg=137.19, stdev=218.65 00:16:55.427 clat (msec): min=41, max=335, avg=99.95, stdev=45.30 00:16:55.427 lat (msec): min=42, max=335, avg=100.09, stdev=45.32 00:16:55.427 clat percentiles (msec): 00:16:55.427 | 1.00th=[ 49], 5.00th=[ 63], 10.00th=[ 65], 20.00th=[ 68], 00:16:55.427 | 30.00th=[ 72], 40.00th=[ 79], 50.00th=[ 88], 60.00th=[ 96], 00:16:55.427 | 70.00th=[ 103], 80.00th=[ 125], 90.00th=[ 146], 95.00th=[ 207], 00:16:55.427 | 99.00th=[ 275], 99.50th=[ 300], 99.90th=[ 334], 99.95th=[ 334], 00:16:55.427 | 99.99th=[ 334] 00:16:55.427 bw ( KiB/s): min= 1792, max=14336, per=0.80%, avg=8523.15, stdev=4272.15, samples=20 00:16:55.427 iops : min= 14, max= 112, avg=66.45, stdev=33.49, samples=20 00:16:55.427 lat (msec) : 10=9.97%, 20=23.21%, 50=15.14%, 100=34.47%, 250=16.21% 00:16:55.427 lat (msec) : 500=0.99% 00:16:55.427 cpu : usr=0.58%, sys=0.20%, ctx=2086, majf=0, minf=7 00:16:55.427 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.427 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.427 issued rwts: total=640,674,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.427 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.427 job91: (groupid=0, jobs=1): err= 0: pid=75139: Wed Jul 24 
05:06:09 2024 00:16:55.427 read: IOPS=73, BW=9411KiB/s (9637kB/s)(80.0MiB/8705msec) 00:16:55.427 slat (usec): min=7, max=1016, avg=55.29, stdev=106.73 00:16:55.427 clat (msec): min=3, max=119, avg=13.21, stdev=12.59 00:16:55.427 lat (msec): min=3, max=119, avg=13.26, stdev=12.58 00:16:55.427 clat percentiles (msec): 00:16:55.427 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:16:55.427 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 12], 00:16:55.427 | 70.00th=[ 13], 80.00th=[ 15], 90.00th=[ 21], 95.00th=[ 24], 00:16:55.427 | 99.00th=[ 110], 99.50th=[ 117], 99.90th=[ 120], 99.95th=[ 120], 00:16:55.427 | 99.99th=[ 120] 00:16:55.427 write: IOPS=83, BW=10.5MiB/s (11.0MB/s)(94.1MiB/8993msec); 0 zone resets 00:16:55.427 slat (usec): min=27, max=1888, avg=128.62, stdev=156.40 00:16:55.427 clat (msec): min=5, max=309, avg=94.98, stdev=43.91 00:16:55.427 lat (msec): min=5, max=309, avg=95.11, stdev=43.91 00:16:55.427 clat percentiles (msec): 00:16:55.427 | 1.00th=[ 14], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 67], 00:16:55.427 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 79], 60.00th=[ 86], 00:16:55.427 | 70.00th=[ 96], 80.00th=[ 126], 90.00th=[ 161], 95.00th=[ 186], 00:16:55.427 | 99.00th=[ 257], 99.50th=[ 292], 99.90th=[ 309], 99.95th=[ 309], 00:16:55.427 | 99.99th=[ 309] 00:16:55.427 bw ( KiB/s): min= 3065, max=15903, per=0.89%, avg=9528.30, stdev=4311.89, samples=20 00:16:55.427 iops : min= 23, max= 124, avg=74.15, stdev=33.75, samples=20 00:16:55.427 lat (msec) : 4=0.14%, 10=21.39%, 20=20.32%, 50=4.81%, 100=38.05% 00:16:55.427 lat (msec) : 250=14.57%, 500=0.72% 00:16:55.427 cpu : usr=0.58%, sys=0.24%, ctx=2228, majf=0, minf=4 00:16:55.427 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.427 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.427 issued rwts: total=640,753,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:16:55.427 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.427 job92: (groupid=0, jobs=1): err= 0: pid=75140: Wed Jul 24 05:06:09 2024 00:16:55.427 read: IOPS=59, BW=7615KiB/s (7798kB/s)(60.0MiB/8068msec) 00:16:55.427 slat (usec): min=7, max=1352, avg=59.41, stdev=131.28 00:16:55.427 clat (msec): min=4, max=145, avg=18.19, stdev=17.45 00:16:55.427 lat (msec): min=4, max=145, avg=18.25, stdev=17.44 00:16:55.427 clat percentiles (msec): 00:16:55.427 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 11], 00:16:55.427 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 17], 00:16:55.427 | 70.00th=[ 20], 80.00th=[ 22], 90.00th=[ 27], 95.00th=[ 34], 00:16:55.428 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 146], 00:16:55.428 | 99.99th=[ 146] 00:16:55.428 write: IOPS=71, BW=9164KiB/s (9384kB/s)(80.0MiB/8939msec); 0 zone resets 00:16:55.428 slat (usec): min=42, max=2255, avg=138.61, stdev=191.10 00:16:55.428 clat (msec): min=48, max=366, avg=110.83, stdev=52.44 00:16:55.428 lat (msec): min=49, max=366, avg=110.97, stdev=52.45 00:16:55.428 clat percentiles (msec): 00:16:55.428 | 1.00th=[ 55], 5.00th=[ 62], 10.00th=[ 64], 20.00th=[ 70], 00:16:55.428 | 30.00th=[ 75], 40.00th=[ 85], 50.00th=[ 95], 60.00th=[ 110], 00:16:55.428 | 70.00th=[ 126], 80.00th=[ 144], 90.00th=[ 176], 95.00th=[ 201], 00:16:55.428 | 99.00th=[ 313], 99.50th=[ 347], 99.90th=[ 368], 99.95th=[ 368], 00:16:55.428 | 99.99th=[ 368] 00:16:55.428 bw ( KiB/s): min= 1792, max=15360, per=0.80%, avg=8526.89, stdev=3618.96, samples=19 00:16:55.428 iops : min= 14, max= 120, avg=66.47, stdev=28.24, samples=19 00:16:55.428 lat (msec) : 10=7.59%, 20=23.75%, 50=10.98%, 100=31.34%, 250=24.38% 00:16:55.428 lat (msec) : 500=1.96% 00:16:55.428 cpu : usr=0.50%, sys=0.18%, ctx=1816, majf=0, minf=5 00:16:55.428 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.428 complete 
: 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.428 issued rwts: total=480,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.428 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.428 job93: (groupid=0, jobs=1): err= 0: pid=75141: Wed Jul 24 05:06:09 2024 00:16:55.428 read: IOPS=78, BW=9.84MiB/s (10.3MB/s)(80.0MiB/8126msec) 00:16:55.428 slat (usec): min=7, max=2485, avg=50.64, stdev=129.79 00:16:55.428 clat (msec): min=5, max=101, avg=20.13, stdev=13.55 00:16:55.428 lat (msec): min=5, max=101, avg=20.18, stdev=13.55 00:16:55.428 clat percentiles (msec): 00:16:55.428 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 11], 00:16:55.428 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 19], 00:16:55.428 | 70.00th=[ 23], 80.00th=[ 27], 90.00th=[ 36], 95.00th=[ 43], 00:16:55.428 | 99.00th=[ 91], 99.50th=[ 96], 99.90th=[ 102], 99.95th=[ 102], 00:16:55.428 | 99.99th=[ 102] 00:16:55.428 write: IOPS=83, BW=10.5MiB/s (11.0MB/s)(88.0MiB/8415msec); 0 zone resets 00:16:55.428 slat (usec): min=49, max=5764, avg=149.03, stdev=380.40 00:16:55.428 clat (msec): min=44, max=325, avg=94.56, stdev=40.74 00:16:55.428 lat (msec): min=44, max=325, avg=94.71, stdev=40.73 00:16:55.428 clat percentiles (msec): 00:16:55.428 | 1.00th=[ 58], 5.00th=[ 62], 10.00th=[ 64], 20.00th=[ 68], 00:16:55.428 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 81], 60.00th=[ 87], 00:16:55.428 | 70.00th=[ 97], 80.00th=[ 115], 90.00th=[ 146], 95.00th=[ 178], 00:16:55.428 | 99.00th=[ 262], 99.50th=[ 275], 99.90th=[ 326], 99.95th=[ 326], 00:16:55.428 | 99.99th=[ 326] 00:16:55.428 bw ( KiB/s): min= 256, max=15104, per=0.88%, avg=9375.00, stdev=4494.28, samples=19 00:16:55.428 iops : min= 2, max= 118, avg=73.16, stdev=35.05, samples=19 00:16:55.428 lat (msec) : 10=7.14%, 20=23.14%, 50=16.00%, 100=39.66%, 250=13.32% 00:16:55.428 lat (msec) : 500=0.74% 00:16:55.428 cpu : usr=0.52%, sys=0.28%, ctx=2149, majf=0, minf=1 00:16:55.428 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.428 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.428 issued rwts: total=640,704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.428 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.428 job94: (groupid=0, jobs=1): err= 0: pid=75142: Wed Jul 24 05:06:09 2024 00:16:55.428 read: IOPS=75, BW=9624KiB/s (9855kB/s)(80.0MiB/8512msec) 00:16:55.428 slat (usec): min=8, max=1758, avg=63.96, stdev=129.70 00:16:55.428 clat (usec): min=8327, max=72330, avg=18838.75, stdev=8676.45 00:16:55.428 lat (usec): min=8605, max=72345, avg=18902.71, stdev=8682.78 00:16:55.428 clat percentiles (usec): 00:16:55.428 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[12125], 00:16:55.428 | 30.00th=[13698], 40.00th=[15270], 50.00th=[16909], 60.00th=[18744], 00:16:55.428 | 70.00th=[20317], 80.00th=[23987], 90.00th=[29754], 95.00th=[34866], 00:16:55.428 | 99.00th=[47449], 99.50th=[57410], 99.90th=[71828], 99.95th=[71828], 00:16:55.428 | 99.99th=[71828] 00:16:55.428 write: IOPS=86, BW=10.8MiB/s (11.3MB/s)(91.9MiB/8541msec); 0 zone resets 00:16:55.428 slat (usec): min=42, max=3401, avg=132.12, stdev=201.16 00:16:55.428 clat (msec): min=31, max=375, avg=92.16, stdev=46.02 00:16:55.428 lat (msec): min=31, max=375, avg=92.29, stdev=46.03 00:16:55.428 clat percentiles (msec): 00:16:55.428 | 1.00th=[ 37], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 66], 00:16:55.428 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 79], 60.00th=[ 84], 00:16:55.428 | 70.00th=[ 92], 80.00th=[ 105], 90.00th=[ 142], 95.00th=[ 184], 00:16:55.428 | 99.00th=[ 300], 99.50th=[ 347], 99.90th=[ 376], 99.95th=[ 376], 00:16:55.428 | 99.99th=[ 376] 00:16:55.428 bw ( KiB/s): min= 504, max=14592, per=0.87%, avg=9310.85, stdev=4581.36, samples=20 00:16:55.428 iops : min= 3, max= 114, avg=72.55, stdev=35.91, samples=20 00:16:55.428 lat (msec) : 10=3.78%, 20=27.93%, 
50=15.05%, 100=40.73%, 250=11.20% 00:16:55.428 lat (msec) : 500=1.31% 00:16:55.428 cpu : usr=0.53%, sys=0.31%, ctx=2215, majf=0, minf=5 00:16:55.428 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.428 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.428 issued rwts: total=640,735,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.428 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.428 job95: (groupid=0, jobs=1): err= 0: pid=75143: Wed Jul 24 05:06:09 2024 00:16:55.428 read: IOPS=76, BW=9818KiB/s (10.1MB/s)(80.0MiB/8344msec) 00:16:55.428 slat (usec): min=7, max=1071, avg=51.22, stdev=97.50 00:16:55.428 clat (usec): min=4097, max=71101, avg=20561.06, stdev=10286.51 00:16:55.428 lat (usec): min=4143, max=71110, avg=20612.29, stdev=10295.78 00:16:55.428 clat percentiles (usec): 00:16:55.428 | 1.00th=[ 8356], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[12649], 00:16:55.428 | 30.00th=[14353], 40.00th=[16450], 50.00th=[18744], 60.00th=[20317], 00:16:55.428 | 70.00th=[22938], 80.00th=[25035], 90.00th=[34866], 95.00th=[41157], 00:16:55.428 | 99.00th=[55313], 99.50th=[56361], 99.90th=[70779], 99.95th=[70779], 00:16:55.428 | 99.99th=[70779] 00:16:55.428 write: IOPS=77, BW=9892KiB/s (10.1MB/s)(81.1MiB/8398msec); 0 zone resets 00:16:55.428 slat (usec): min=47, max=31704, avg=192.84, stdev=1268.27 00:16:55.428 clat (msec): min=52, max=379, avg=102.21, stdev=49.08 00:16:55.428 lat (msec): min=53, max=379, avg=102.40, stdev=49.05 00:16:55.428 clat percentiles (msec): 00:16:55.428 | 1.00th=[ 60], 5.00th=[ 62], 10.00th=[ 64], 20.00th=[ 67], 00:16:55.428 | 30.00th=[ 70], 40.00th=[ 77], 50.00th=[ 85], 60.00th=[ 95], 00:16:55.428 | 70.00th=[ 109], 80.00th=[ 134], 90.00th=[ 161], 95.00th=[ 197], 00:16:55.428 | 99.00th=[ 279], 99.50th=[ 317], 99.90th=[ 380], 99.95th=[ 380], 00:16:55.428 | 99.99th=[ 380] 00:16:55.428 bw ( KiB/s): 
min= 512, max=14848, per=0.77%, avg=8211.60, stdev=4691.81, samples=20 00:16:55.428 iops : min= 4, max= 116, avg=64.05, stdev=36.61, samples=20 00:16:55.428 lat (msec) : 10=5.20%, 20=22.81%, 50=20.17%, 100=33.98%, 250=16.45% 00:16:55.428 lat (msec) : 500=1.40% 00:16:55.428 cpu : usr=0.51%, sys=0.27%, ctx=2082, majf=0, minf=6 00:16:55.428 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.428 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.428 issued rwts: total=640,649,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.428 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.428 job96: (groupid=0, jobs=1): err= 0: pid=75144: Wed Jul 24 05:06:09 2024 00:16:55.428 read: IOPS=73, BW=9428KiB/s (9654kB/s)(80.0MiB/8689msec) 00:16:55.428 slat (usec): min=7, max=1450, avg=56.26, stdev=116.07 00:16:55.428 clat (usec): min=5938, max=65351, avg=16575.21, stdev=7709.42 00:16:55.428 lat (usec): min=5977, max=65360, avg=16631.47, stdev=7694.29 00:16:55.428 clat percentiles (usec): 00:16:55.428 | 1.00th=[ 7242], 5.00th=[ 8586], 10.00th=[ 9503], 20.00th=[10814], 00:16:55.428 | 30.00th=[11731], 40.00th=[13698], 50.00th=[14746], 60.00th=[16319], 00:16:55.428 | 70.00th=[17957], 80.00th=[20317], 90.00th=[25560], 95.00th=[30802], 00:16:55.428 | 99.00th=[49021], 99.50th=[52691], 99.90th=[65274], 99.95th=[65274], 00:16:55.428 | 99.99th=[65274] 00:16:55.428 write: IOPS=85, BW=10.7MiB/s (11.2MB/s)(93.5MiB/8718msec); 0 zone resets 00:16:55.428 slat (usec): min=42, max=2381, avg=122.02, stdev=166.50 00:16:55.428 clat (msec): min=17, max=317, avg=92.64, stdev=45.50 00:16:55.428 lat (msec): min=17, max=317, avg=92.76, stdev=45.49 00:16:55.428 clat percentiles (msec): 00:16:55.428 | 1.00th=[ 24], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 66], 00:16:55.428 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 77], 60.00th=[ 83], 00:16:55.429 | 70.00th=[ 
91], 80.00th=[ 109], 90.00th=[ 150], 95.00th=[ 174], 00:16:55.429 | 99.00th=[ 292], 99.50th=[ 305], 99.90th=[ 317], 99.95th=[ 317], 00:16:55.429 | 99.99th=[ 317] 00:16:55.429 bw ( KiB/s): min= 512, max=15616, per=0.89%, avg=9471.15, stdev=4757.52, samples=20 00:16:55.429 iops : min= 4, max= 122, avg=73.80, stdev=37.22, samples=20 00:16:55.429 lat (msec) : 10=5.76%, 20=30.91%, 50=10.16%, 100=40.27%, 250=11.31% 00:16:55.429 lat (msec) : 500=1.59% 00:16:55.429 cpu : usr=0.60%, sys=0.25%, ctx=2176, majf=0, minf=7 00:16:55.429 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.429 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.429 issued rwts: total=640,748,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.429 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.429 job97: (groupid=0, jobs=1): err= 0: pid=75145: Wed Jul 24 05:06:09 2024 00:16:55.429 read: IOPS=76, BW=9826KiB/s (10.1MB/s)(80.0MiB/8337msec) 00:16:55.429 slat (usec): min=8, max=861, avg=36.79, stdev=65.48 00:16:55.429 clat (usec): min=8140, max=48208, avg=17030.73, stdev=7449.98 00:16:55.429 lat (usec): min=8207, max=48405, avg=17067.52, stdev=7456.22 00:16:55.429 clat percentiles (usec): 00:16:55.429 | 1.00th=[ 8586], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10552], 00:16:55.429 | 30.00th=[11469], 40.00th=[13304], 50.00th=[15270], 60.00th=[17433], 00:16:55.429 | 70.00th=[19792], 80.00th=[21890], 90.00th=[27657], 95.00th=[32900], 00:16:55.429 | 99.00th=[40109], 99.50th=[41157], 99.90th=[47973], 99.95th=[47973], 00:16:55.429 | 99.99th=[47973] 00:16:55.429 write: IOPS=82, BW=10.3MiB/s (10.8MB/s)(89.4MiB/8699msec); 0 zone resets 00:16:55.429 slat (usec): min=36, max=20221, avg=168.88, stdev=780.25 00:16:55.429 clat (msec): min=39, max=395, avg=96.10, stdev=50.50 00:16:55.429 lat (msec): min=39, max=395, avg=96.27, stdev=50.48 00:16:55.429 clat 
percentiles (msec): 00:16:55.429 | 1.00th=[ 43], 5.00th=[ 62], 10.00th=[ 64], 20.00th=[ 68], 00:16:55.429 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 81], 60.00th=[ 87], 00:16:55.429 | 70.00th=[ 95], 80.00th=[ 107], 90.00th=[ 142], 95.00th=[ 205], 00:16:55.429 | 99.00th=[ 305], 99.50th=[ 359], 99.90th=[ 397], 99.95th=[ 397], 00:16:55.429 | 99.99th=[ 397] 00:16:55.429 bw ( KiB/s): min= 1024, max=15329, per=0.85%, avg=9041.75, stdev=4687.76, samples=20 00:16:55.429 iops : min= 8, max= 119, avg=70.35, stdev=36.62, samples=20 00:16:55.429 lat (msec) : 10=6.13%, 20=27.82%, 50=13.87%, 100=39.41%, 250=11.14% 00:16:55.429 lat (msec) : 500=1.62% 00:16:55.429 cpu : usr=0.54%, sys=0.26%, ctx=2164, majf=0, minf=3 00:16:55.429 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.429 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.429 issued rwts: total=640,715,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.429 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.429 job98: (groupid=0, jobs=1): err= 0: pid=75146: Wed Jul 24 05:06:09 2024 00:16:55.429 read: IOPS=73, BW=9414KiB/s (9640kB/s)(80.0MiB/8702msec) 00:16:55.429 slat (usec): min=8, max=1721, avg=63.01, stdev=136.68 00:16:55.429 clat (usec): min=3932, max=40476, avg=11738.70, stdev=5591.13 00:16:55.429 lat (usec): min=3946, max=40493, avg=11801.71, stdev=5589.52 00:16:55.429 clat percentiles (usec): 00:16:55.429 | 1.00th=[ 5014], 5.00th=[ 5407], 10.00th=[ 5800], 20.00th=[ 6849], 00:16:55.429 | 30.00th=[ 7832], 40.00th=[ 9503], 50.00th=[11207], 60.00th=[12125], 00:16:55.429 | 70.00th=[13698], 80.00th=[15270], 90.00th=[17695], 95.00th=[21890], 00:16:55.429 | 99.00th=[31851], 99.50th=[35390], 99.90th=[40633], 99.95th=[40633], 00:16:55.429 | 99.99th=[40633] 00:16:55.429 write: IOPS=84, BW=10.6MiB/s (11.1MB/s)(96.2MiB/9114msec); 0 zone resets 00:16:55.429 slat (usec): 
min=43, max=2643, avg=141.08, stdev=202.66 00:16:55.429 clat (msec): min=9, max=293, avg=94.12, stdev=40.59 00:16:55.429 lat (msec): min=9, max=294, avg=94.26, stdev=40.61 00:16:55.429 clat percentiles (msec): 00:16:55.429 | 1.00th=[ 12], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 67], 00:16:55.429 | 30.00th=[ 71], 40.00th=[ 75], 50.00th=[ 81], 60.00th=[ 86], 00:16:55.429 | 70.00th=[ 97], 80.00th=[ 121], 90.00th=[ 157], 95.00th=[ 180], 00:16:55.429 | 99.00th=[ 230], 99.50th=[ 243], 99.90th=[ 296], 99.95th=[ 296], 00:16:55.429 | 99.99th=[ 296] 00:16:55.429 bw ( KiB/s): min= 2048, max=17664, per=0.91%, avg=9738.30, stdev=4268.11, samples=20 00:16:55.429 iops : min= 16, max= 138, avg=75.85, stdev=33.39, samples=20 00:16:55.429 lat (msec) : 4=0.07%, 10=19.43%, 20=23.83%, 50=3.76%, 100=37.23% 00:16:55.429 lat (msec) : 250=15.53%, 500=0.14% 00:16:55.429 cpu : usr=0.59%, sys=0.25%, ctx=2282, majf=0, minf=3 00:16:55.429 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.429 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.429 issued rwts: total=640,770,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.429 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.429 job99: (groupid=0, jobs=1): err= 0: pid=75147: Wed Jul 24 05:06:09 2024 00:16:55.429 read: IOPS=67, BW=8632KiB/s (8839kB/s)(60.0MiB/7118msec) 00:16:55.429 slat (usec): min=7, max=894, avg=44.94, stdev=82.34 00:16:55.429 clat (msec): min=3, max=195, avg=20.85, stdev=29.63 00:16:55.429 lat (msec): min=3, max=195, avg=20.90, stdev=29.63 00:16:55.429 clat percentiles (msec): 00:16:55.429 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8], 00:16:55.429 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 12], 60.00th=[ 14], 00:16:55.429 | 70.00th=[ 17], 80.00th=[ 21], 90.00th=[ 33], 95.00th=[ 99], 00:16:55.429 | 99.00th=[ 150], 99.50th=[ 192], 99.90th=[ 197], 99.95th=[ 197], 
00:16:55.429 | 99.99th=[ 197] 00:16:55.429 write: IOPS=68, BW=8766KiB/s (8976kB/s)(75.4MiB/8805msec); 0 zone resets 00:16:55.429 slat (usec): min=43, max=31861, avg=190.61, stdev=1310.18 00:16:55.429 clat (msec): min=58, max=468, avg=115.84, stdev=57.74 00:16:55.429 lat (msec): min=58, max=468, avg=116.03, stdev=57.70 00:16:55.429 clat percentiles (msec): 00:16:55.429 | 1.00th=[ 62], 5.00th=[ 65], 10.00th=[ 70], 20.00th=[ 75], 00:16:55.429 | 30.00th=[ 82], 40.00th=[ 90], 50.00th=[ 101], 60.00th=[ 113], 00:16:55.429 | 70.00th=[ 131], 80.00th=[ 146], 90.00th=[ 176], 95.00th=[ 190], 00:16:55.429 | 99.00th=[ 393], 99.50th=[ 414], 99.90th=[ 468], 99.95th=[ 468], 00:16:55.429 | 99.99th=[ 468] 00:16:55.429 bw ( KiB/s): min= 1795, max=13312, per=0.75%, avg=8024.16, stdev=3331.16, samples=19 00:16:55.429 iops : min= 14, max= 104, avg=62.63, stdev=26.05, samples=19 00:16:55.429 lat (msec) : 4=0.09%, 10=17.82%, 20=16.71%, 50=6.00%, 100=29.55% 00:16:55.429 lat (msec) : 250=28.35%, 500=1.48% 00:16:55.429 cpu : usr=0.50%, sys=0.15%, ctx=1812, majf=0, minf=7 00:16:55.429 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.429 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.429 issued rwts: total=480,603,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.429 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.429 00:16:55.429 Run status group 0 (all jobs): 00:16:55.429 READ: bw=910MiB/s (955MB/s), 7591KiB/s-13.9MiB/s (7773kB/s-14.5MB/s), io=8503MiB (8916MB), run=7118-9340msec 00:16:55.429 WRITE: bw=1043MiB/s (1094MB/s), 8766KiB/s-16.2MiB/s (8976kB/s-17.0MB/s), io=9622MiB (10.1GB), run=8025-9226msec 00:16:55.429 00:16:55.429 Disk stats (read/write): 00:16:55.429 sdc: ios=676/799, merge=0/0, ticks=6711/71810, in_queue=78521, util=71.64% 00:16:55.429 sde: ios=642/776, merge=0/0, ticks=6184/70289, in_queue=76474, util=70.91% 
00:16:55.429 sdh: ios=642/762, merge=0/0, ticks=6793/71146, in_queue=77940, util=72.29% 00:16:55.429 sdj: ios=515/640, merge=0/0, ticks=13450/61582, in_queue=75032, util=73.21% 00:16:55.429 sdp: ios=641/679, merge=0/0, ticks=12696/63730, in_queue=76427, util=74.08% 00:16:55.429 sdt: ios=480/615, merge=0/0, ticks=10519/67426, in_queue=77946, util=75.28% 00:16:55.429 sdy: ios=642/759, merge=0/0, ticks=7680/70332, in_queue=78013, util=75.31% 00:16:55.429 sdab: ios=642/718, merge=0/0, ticks=13365/64878, in_queue=78244, util=75.75% 00:16:55.429 sdad: ios=640/640, merge=0/0, ticks=9797/67987, in_queue=77784, util=75.67% 00:16:55.429 sdaf: ios=641/716, merge=0/0, ticks=12537/64666, in_queue=77203, util=75.90% 00:16:55.429 sdf: ios=485/640, merge=0/0, ticks=8574/68556, in_queue=77130, util=75.39% 00:16:55.429 sdm: ios=480/600, merge=0/0, ticks=15658/62398, in_queue=78057, util=76.03% 00:16:55.429 sdr: ios=641/732, merge=0/0, ticks=13265/63499, in_queue=76764, util=76.36% 00:16:55.429 sdv: ios=641/734, merge=0/0, ticks=12640/64251, in_queue=76892, util=76.91% 00:16:55.429 sdaa: ios=641/688, merge=0/0, ticks=9979/66827, in_queue=76807, util=77.13% 00:16:55.429 sdae: ios=480/596, merge=0/0, ticks=14159/64153, in_queue=78313, util=77.02% 00:16:55.429 sdag: ios=642/736, merge=0/0, ticks=9921/67700, in_queue=77621, util=77.43% 00:16:55.429 sdah: ios=641/757, merge=0/0, ticks=10433/66509, in_queue=76943, util=77.51% 00:16:55.429 sdak: ios=641/706, merge=0/0, ticks=13845/62543, in_queue=76389, util=77.70% 00:16:55.429 sdam: ios=640/680, merge=0/0, ticks=8697/68039, in_queue=76736, util=77.58% 00:16:55.429 sdk: ios=828/960, merge=0/0, ticks=10176/67282, in_queue=77458, util=77.39% 00:16:55.429 sdo: ios=999/1102, merge=0/0, ticks=9044/67292, in_queue=76336, util=77.46% 00:16:55.429 sds: ios=962/960, merge=0/0, ticks=11915/65434, in_queue=77350, util=78.05% 00:16:55.429 sdw: ios=998/996, merge=0/0, ticks=14676/62958, in_queue=77634, util=77.74% 00:16:55.429 sdz: ios=996/966, 
merge=0/0, ticks=14647/62754, in_queue=77402, util=78.16% 00:16:55.429 sdac: ios=1000/1097, merge=0/0, ticks=10591/66964, in_queue=77555, util=78.43% 00:16:55.429 sdai: ios=996/1029, merge=0/0, ticks=12690/64671, in_queue=77362, util=78.58% 00:16:55.429 sdaj: ios=996/1113, merge=0/0, ticks=10022/68102, in_queue=78125, util=79.03% 00:16:55.429 sdal: ios=962/1053, merge=0/0, ticks=11673/65365, in_queue=77039, util=78.96% 00:16:55.429 sdaq: ios=801/946, merge=0/0, ticks=11627/66345, in_queue=77972, util=79.01% 00:16:55.429 sdan: ios=614/640, merge=0/0, ticks=10604/67560, in_queue=78164, util=79.23% 00:16:55.429 sdao: ios=642/751, merge=0/0, ticks=11897/66180, in_queue=78078, util=80.06% 00:16:55.429 sdap: ios=640/643, merge=0/0, ticks=9077/68116, in_queue=77193, util=79.94% 00:16:55.429 sdar: ios=641/681, merge=0/0, ticks=12337/64736, in_queue=77074, util=80.39% 00:16:55.429 sdas: ios=480/626, merge=0/0, ticks=7737/69971, in_queue=77708, util=80.43% 00:16:55.429 sdat: ios=641/718, merge=0/0, ticks=12371/64160, in_queue=76532, util=80.55% 00:16:55.430 sdau: ios=641/731, merge=0/0, ticks=11605/65532, in_queue=77138, util=80.82% 00:16:55.430 sdav: ios=641/713, merge=0/0, ticks=12103/64944, in_queue=77048, util=80.85% 00:16:55.430 sdaw: ios=642/723, merge=0/0, ticks=9763/67807, in_queue=77571, util=81.57% 00:16:55.430 sday: ios=642/740, merge=0/0, ticks=10972/66228, in_queue=77201, util=82.12% 00:16:55.430 sdax: ios=641/657, merge=0/0, ticks=8811/68267, in_queue=77078, util=82.04% 00:16:55.430 sdaz: ios=642/753, merge=0/0, ticks=9432/68344, in_queue=77776, util=82.25% 00:16:55.430 sdba: ios=480/638, merge=0/0, ticks=6971/70456, in_queue=77427, util=82.10% 00:16:55.430 sdbb: ios=642/671, merge=0/0, ticks=10857/66092, in_queue=76949, util=81.94% 00:16:55.430 sdbc: ios=642/753, merge=0/0, ticks=9535/68301, in_queue=77837, util=82.82% 00:16:55.430 sdbd: ios=640/704, merge=0/0, ticks=11432/65721, in_queue=77154, util=82.89% 00:16:55.430 sdbe: ios=642/759, merge=0/0, 
ticks=9567/67375, in_queue=76943, util=83.10% 00:16:55.430 sdbf: ios=566/640, merge=0/0, ticks=9793/68005, in_queue=77799, util=83.60% 00:16:55.430 sdbh: ios=641/749, merge=0/0, ticks=9245/68094, in_queue=77340, util=83.87% 00:16:55.430 sdbj: ios=642/789, merge=0/0, ticks=9386/68092, in_queue=77478, util=84.42% 00:16:55.430 sdbg: ios=642/735, merge=0/0, ticks=10662/66873, in_queue=77535, util=84.57% 00:16:55.430 sdbi: ios=642/703, merge=0/0, ticks=10512/66189, in_queue=76701, util=84.07% 00:16:55.430 sdbk: ios=480/583, merge=0/0, ticks=12229/65573, in_queue=77803, util=84.26% 00:16:55.430 sdbl: ios=640/655, merge=0/0, ticks=9459/68380, in_queue=77840, util=84.69% 00:16:55.430 sdbm: ios=641/642, merge=0/0, ticks=11190/65862, in_queue=77052, util=85.22% 00:16:55.430 sdbn: ios=641/640, merge=0/0, ticks=8632/69204, in_queue=77836, util=85.25% 00:16:55.430 sdbo: ios=642/735, merge=0/0, ticks=11005/64821, in_queue=75827, util=84.44% 00:16:55.430 sdbp: ios=642/724, merge=0/0, ticks=12426/64722, in_queue=77149, util=86.32% 00:16:55.430 sdbq: ios=641/724, merge=0/0, ticks=11503/64806, in_queue=76309, util=86.18% 00:16:55.430 sdbr: ios=642/739, merge=0/0, ticks=10733/67238, in_queue=77971, util=87.03% 00:16:55.430 sdbs: ios=962/1092, merge=0/0, ticks=9863/66882, in_queue=76746, util=86.78% 00:16:55.430 sdbt: ios=802/951, merge=0/0, ticks=10437/65283, in_queue=75721, util=86.70% 00:16:55.430 sdbu: ios=996/1082, merge=0/0, ticks=12389/65220, in_queue=77609, util=86.95% 00:16:55.430 sdbv: ios=962/1053, merge=0/0, ticks=10021/66690, in_queue=76712, util=87.12% 00:16:55.430 sdby: ios=802/957, merge=0/0, ticks=11110/66552, in_queue=77663, util=87.53% 00:16:55.430 sdcc: ios=962/976, merge=0/0, ticks=12474/64481, in_queue=76956, util=87.84% 00:16:55.430 sdcg: ios=995/1094, merge=0/0, ticks=12246/65418, in_queue=77665, util=88.14% 00:16:55.430 sdci: ios=902/960, merge=0/0, ticks=10154/67721, in_queue=77876, util=88.04% 00:16:55.430 sdcl: ios=962/1056, merge=0/0, ticks=8839/68285, 
in_queue=77124, util=88.19% 00:16:55.430 sdcn: ios=962/982, merge=0/0, ticks=9923/67680, in_queue=77604, util=88.62% 00:16:55.430 sdbx: ios=641/734, merge=0/0, ticks=12345/64671, in_queue=77017, util=88.90% 00:16:55.430 sdbz: ios=641/750, merge=0/0, ticks=11386/66185, in_queue=77572, util=89.27% 00:16:55.430 sdcb: ios=640/712, merge=0/0, ticks=9801/67261, in_queue=77062, util=89.57% 00:16:55.430 sdce: ios=641/662, merge=0/0, ticks=10398/66538, in_queue=76937, util=90.04% 00:16:55.430 sdcj: ios=480/636, merge=0/0, ticks=7803/68804, in_queue=76608, util=89.81% 00:16:55.430 sdcm: ios=642/725, merge=0/0, ticks=13910/63800, in_queue=77711, util=90.77% 00:16:55.430 sdcp: ios=641/736, merge=0/0, ticks=10809/66323, in_queue=77133, util=90.94% 00:16:55.430 sdcs: ios=480/604, merge=0/0, ticks=9983/68367, in_queue=78350, util=91.37% 00:16:55.430 sdcu: ios=641/681, merge=0/0, ticks=12418/64426, in_queue=76845, util=91.64% 00:16:55.430 sdcv: ios=641/697, merge=0/0, ticks=11378/66061, in_queue=77440, util=92.29% 00:16:55.430 sdbw: ios=640/663, merge=0/0, ticks=7576/69925, in_queue=77502, util=92.05% 00:16:55.430 sdca: ios=640/661, merge=0/0, ticks=8182/69803, in_queue=77985, util=92.71% 00:16:55.430 sdcd: ios=640/727, merge=0/0, ticks=8584/68851, in_queue=77435, util=93.14% 00:16:55.430 sdcf: ios=640/693, merge=0/0, ticks=8655/68256, in_queue=76912, util=93.52% 00:16:55.430 sdch: ios=642/759, merge=0/0, ticks=15579/62000, in_queue=77580, util=94.57% 00:16:55.430 sdck: ios=641/715, merge=0/0, ticks=8245/68698, in_queue=76943, util=94.26% 00:16:55.430 sdco: ios=641/740, merge=0/0, ticks=10509/66746, in_queue=77256, util=94.69% 00:16:55.430 sdcq: ios=641/751, merge=0/0, ticks=11823/64976, in_queue=76799, util=94.85% 00:16:55.430 sdcr: ios=640/743, merge=0/0, ticks=7768/69689, in_queue=77458, util=95.19% 00:16:55.430 sdct: ios=480/640, merge=0/0, ticks=6188/71943, in_queue=78132, util=95.57% 00:16:55.430 sda: ios=641/666, merge=0/0, ticks=11577/64724, in_queue=76302, util=95.79% 
00:16:55.430 sdb: ios=642/748, merge=0/0, ticks=8239/70120, in_queue=78359, util=96.40% 00:16:55.430 sdd: ios=480/636, merge=0/0, ticks=8537/68668, in_queue=77205, util=96.21% 00:16:55.430 sdg: ios=641/693, merge=0/0, ticks=12749/64151, in_queue=76901, util=96.29% 00:16:55.430 sdi: ios=641/728, merge=0/0, ticks=11828/66066, in_queue=77895, util=96.68% 00:16:55.430 sdl: ios=641/642, merge=0/0, ticks=12800/63870, in_queue=76671, util=96.83% 00:16:55.430 sdn: ios=642/741, merge=0/0, ticks=10423/67716, in_queue=78140, util=97.79% 00:16:55.430 sdq: ios=641/708, merge=0/0, ticks=10686/66525, in_queue=77211, util=98.17% 00:16:55.430 sdu: ios=642/766, merge=0/0, ticks=7293/71288, in_queue=78582, util=98.28% 00:16:55.430 sdx: ios=480/597, merge=0/0, ticks=9900/68485, in_queue=78386, util=98.25% 00:16:55.430 [2024-07-24 05:06:09.921377] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:09.926847] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:09.928762] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:09.930698] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:09.932687] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:09.934877] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:09.937295] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:09.939996] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:09.942824] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:09.945327] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:09.947680] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:09.949954] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:09.952060] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:09.955589] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 05:06:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@78 -- # timing_exit fio 00:16:55.430 05:06:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:55.430 05:06:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:55.430 [2024-07-24 05:06:09.958181] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:09.960146] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:09.962440] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:09.964746] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:09.966806] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:09.970078] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:09.972303] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:09.974344] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:09.976680] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:09.978722] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: 
unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:09.981336] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:09.985396] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 05:06:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@80 -- # rm -f ./local-job0-0-verify.state 00:16:55.430 [2024-07-24 05:06:09.988283] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 05:06:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:16:55.430 05:06:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@83 -- # rm -f 00:16:55.430 [2024-07-24 05:06:09.990529] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 05:06:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@84 -- # iscsicleanup 00:16:55.430 05:06:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:16:55.430 Cleaning up iSCSI connection 00:16:55.430 05:06:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:16:55.430 [2024-07-24 05:06:09.992661] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:09.994670] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:09.996684] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:09.998690] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:10.001530] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:10.007672] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:10.010761] 
scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:10.014479] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:10.016985] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:10.020497] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:10.023188] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:10.025318] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.430 [2024-07-24 05:06:10.027961] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.031167] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.033751] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.036590] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.041840] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.043955] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.046395] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.051175] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.053746] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.056437] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.059782] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: 
unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.064320] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.068996] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.072605] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.078319] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.082907] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.090039] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.094338] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.100281] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.106326] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.114658] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.117072] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.120085] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.123006] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.132105] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.134834] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.138306] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 
[2024-07-24 05:06:10.175637] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.182081] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.185157] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.187124] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.192479] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.194987] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.197162] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.199660] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:55.690 [2024-07-24 05:06:10.203807] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:56.625 Logging out of session [sid: 10, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:16:56.625 Logging out of session [sid: 11, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:16:56.625 Logging out of session [sid: 12, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:16:56.625 Logging out of session [sid: 13, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:16:56.625 Logging out of session [sid: 14, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:16:56.625 Logging out of session [sid: 15, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:16:56.625 Logging out of session [sid: 16, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:16:56.625 Logging out of session [sid: 17, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:16:56.625 Logging out of session [sid: 18, target: 
iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:16:56.625 Logging out of session [sid: 9, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:16:56.625 Logout of [sid: 10, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:16:56.625 Logout of [sid: 11, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:16:56.625 Logout of [sid: 12, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:16:56.625 Logout of [sid: 13, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:16:56.625 Logout of [sid: 14, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:16:56.625 Logout of [sid: 15, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:16:56.625 Logout of [sid: 16, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:16:56.625 Logout of [sid: 17, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:16:56.625 Logout of [sid: 18, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:16:56.625 Logout of [sid: 9, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:16:56.625 05:06:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:16:56.625 05:06:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@983 -- # rm -rf 00:16:56.625 05:06:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@85 -- # killprocess 72104 00:16:56.625 05:06:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@948 -- # '[' -z 72104 ']' 00:16:56.625 05:06:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@952 -- # kill -0 72104 00:16:56.625 05:06:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@953 -- # uname 00:16:56.625 05:06:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:56.625 05:06:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72104 00:16:56.625 05:06:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:56.625 05:06:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:56.625 killing process with pid 72104 00:16:56.625 05:06:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72104' 00:16:56.625 05:06:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@967 -- # kill 72104 00:16:56.625 05:06:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@972 -- # wait 72104 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@86 -- # iscsitestfini 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:17:04.733 00:17:04.733 real 1m2.886s 00:17:04.733 user 4m13.617s 00:17:04.733 sys 0m26.934s 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:04.733 ************************************ 00:17:04.733 END TEST iscsi_tgt_iscsi_lvol 00:17:04.733 
************************************ 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:04.733 05:06:18 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@37 -- # run_test iscsi_tgt_fio /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/fio.sh 00:17:04.733 05:06:18 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:04.733 05:06:18 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:04.733 05:06:18 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:17:04.733 ************************************ 00:17:04.733 START TEST iscsi_tgt_fio 00:17:04.733 ************************************ 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/fio.sh 00:17:04.733 * Looking for test storage... 00:17:04.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@17 -- # 
TARGET_BRIDGE2=tgt_br2 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@11 -- # iscsitestinit 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@48 -- # '[' -z 10.0.0.1 ']' 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@53 -- # '[' -z 10.0.0.2 ']' 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@58 -- # MALLOC_BDEV_SIZE=64 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@59 -- # MALLOC_BLOCK_SIZE=4096 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@60 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@61 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@63 -- # timing_enter start_iscsi_tgt 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:04.733 
05:06:18 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@66 -- # pid=76779 00:17:04.733 Process pid: 76779 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@67 -- # echo 'Process pid: 76779' 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@69 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@71 -- # waitforlisten 76779 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@829 -- # '[' -z 76779 ']' 00:17:04.733 05:06:18 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.734 05:06:18 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:04.734 05:06:18 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.734 05:06:18 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:04.734 05:06:18 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@65 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:17:04.734 05:06:18 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:17:04.734 [2024-07-24 05:06:18.704195] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:17:04.734 [2024-07-24 05:06:18.704360] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76779 ] 00:17:04.734 [2024-07-24 05:06:18.885905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.734 [2024-07-24 05:06:19.099728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.992 05:06:19 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:04.992 05:06:19 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@862 -- # return 0 00:17:04.992 05:06:19 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:05.560 [2024-07-24 05:06:20.055501] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:06.127 iscsi_tgt is listening. Running tests... 00:17:06.127 05:06:20 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@75 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:17:06.127 05:06:20 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@77 -- # timing_exit start_iscsi_tgt 00:17:06.127 05:06:20 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:06.127 05:06:20 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:17:06.127 05:06:20 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:17:06.385 05:06:20 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:17:06.642 05:06:21 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 4096 00:17:07.208 05:06:21 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@82 -- # malloc_bdevs='Malloc0 ' 00:17:07.208 05:06:21 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 4096 00:17:07.465 05:06:21 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@83 -- # malloc_bdevs+=Malloc1 00:17:07.465 05:06:21 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:07.465 05:06:22 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 1024 512 00:17:09.369 05:06:23 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@85 -- # bdev=Malloc2 00:17:09.369 05:06:23 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias 'raid0:0 Malloc2:1' 1:2 64 -d 00:17:09.369 05:06:23 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@91 -- # sleep 1 00:17:10.302 05:06:24 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@93 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:17:10.302 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:17:10.302 05:06:24 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@94 -- # iscsiadm -m node --login -p 
10.0.0.1:3260 00:17:10.302 [2024-07-24 05:06:24.761712] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:10.302 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:17:10.302 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:17:10.302 05:06:24 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@95 -- # waitforiscsidevices 2 00:17:10.302 05:06:24 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@116 -- # local num=2 00:17:10.302 05:06:24 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:17:10.302 05:06:24 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:17:10.302 05:06:24 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:17:10.302 05:06:24 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:17:10.302 [2024-07-24 05:06:24.774153] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:10.302 05:06:24 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # n=2 00:17:10.302 05:06:24 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@120 -- # '[' 2 -ne 2 ']' 00:17:10.302 05:06:24 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@123 -- # return 0 00:17:10.302 05:06:24 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@97 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; delete_tmp_files; exit 1' SIGINT SIGTERM EXIT 00:17:10.302 05:06:24 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 1 -t randrw -r 1 -v 00:17:10.302 [global] 00:17:10.302 thread=1 00:17:10.302 invalidate=1 00:17:10.302 rw=randrw 00:17:10.302 time_based=1 00:17:10.302 runtime=1 00:17:10.302 ioengine=libaio 00:17:10.302 direct=1 00:17:10.302 bs=4096 00:17:10.302 iodepth=1 00:17:10.302 norandommap=0 00:17:10.302 numjobs=1 00:17:10.302 00:17:10.302 verify_dump=1 00:17:10.302 verify_backlog=512 
00:17:10.302 verify_state_save=0 00:17:10.302 do_verify=1 00:17:10.302 verify=crc32c-intel 00:17:10.302 [job0] 00:17:10.302 filename=/dev/sda 00:17:10.302 [job1] 00:17:10.302 filename=/dev/sdb 00:17:10.302 queue_depth set to 113 (sda) 00:17:10.302 queue_depth set to 113 (sdb) 00:17:10.560 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:10.560 job1: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:10.560 fio-3.35 00:17:10.560 Starting 2 threads 00:17:10.560 [2024-07-24 05:06:25.001419] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:10.560 [2024-07-24 05:06:25.005049] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:11.493 [2024-07-24 05:06:26.117233] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:11.493 [2024-07-24 05:06:26.118987] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:11.751 00:17:11.751 job0: (groupid=0, jobs=1): err= 0: pid=76935: Wed Jul 24 05:06:26 2024 00:17:11.751 read: IOPS=6201, BW=24.2MiB/s (25.4MB/s)(24.2MiB/1000msec) 00:17:11.751 slat (nsec): min=2954, max=51608, avg=5848.91, stdev=1709.11 00:17:11.751 clat (usec): min=60, max=755, avg=98.11, stdev=24.29 00:17:11.751 lat (usec): min=65, max=760, avg=103.96, stdev=24.65 00:17:11.751 clat percentiles (usec): 00:17:11.751 | 1.00th=[ 81], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 86], 00:17:11.751 | 30.00th=[ 87], 40.00th=[ 91], 50.00th=[ 92], 60.00th=[ 94], 00:17:11.751 | 70.00th=[ 98], 80.00th=[ 103], 90.00th=[ 131], 95.00th=[ 141], 00:17:11.751 | 99.00th=[ 165], 99.50th=[ 176], 99.90th=[ 326], 99.95th=[ 371], 00:17:11.751 | 99.99th=[ 758] 00:17:11.751 bw ( KiB/s): min=13128, max=13128, per=26.56%, avg=13128.00, stdev= 0.00, samples=1 00:17:11.751 iops : min= 3282, max= 3282, avg=3282.00, stdev= 0.00, samples=1 00:17:11.751 write: 
IOPS=3270, BW=12.8MiB/s (13.4MB/s)(12.8MiB/1000msec); 0 zone resets 00:17:11.751 slat (nsec): min=3919, max=30953, avg=7009.93, stdev=2240.81 00:17:11.751 clat (usec): min=63, max=359, avg=99.34, stdev=26.62 00:17:11.751 lat (usec): min=69, max=383, avg=106.35, stdev=27.57 00:17:11.751 clat percentiles (usec): 00:17:11.751 | 1.00th=[ 81], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 87], 00:17:11.751 | 30.00th=[ 88], 40.00th=[ 92], 50.00th=[ 93], 60.00th=[ 95], 00:17:11.751 | 70.00th=[ 99], 80.00th=[ 104], 90.00th=[ 115], 95.00th=[ 141], 00:17:11.751 | 99.00th=[ 204], 99.50th=[ 306], 99.90th=[ 351], 99.95th=[ 359], 00:17:11.751 | 99.99th=[ 359] 00:17:11.751 bw ( KiB/s): min=13592, max=13592, per=52.12%, avg=13592.00, stdev= 0.00, samples=1 00:17:11.751 iops : min= 3398, max= 3398, avg=3398.00, stdev= 0.00, samples=1 00:17:11.751 lat (usec) : 100=74.60%, 250=24.88%, 500=0.50%, 750=0.02%, 1000=0.01% 00:17:11.751 cpu : usr=3.20%, sys=7.50%, ctx=9471, majf=0, minf=7 00:17:11.751 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:11.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:11.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:11.751 issued rwts: total=6201,3270,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:11.751 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:11.751 job1: (groupid=0, jobs=1): err= 0: pid=76936: Wed Jul 24 05:06:26 2024 00:17:11.751 read: IOPS=6156, BW=24.0MiB/s (25.2MB/s)(24.0MiB/1000msec) 00:17:11.751 slat (nsec): min=2753, max=68447, avg=3911.12, stdev=2009.43 00:17:11.751 clat (usec): min=51, max=709, avg=99.42, stdev=23.71 00:17:11.751 lat (usec): min=57, max=712, avg=103.34, stdev=23.99 00:17:11.751 clat percentiles (usec): 00:17:11.751 | 1.00th=[ 83], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 87], 00:17:11.751 | 30.00th=[ 88], 40.00th=[ 90], 50.00th=[ 93], 60.00th=[ 95], 00:17:11.751 | 70.00th=[ 98], 80.00th=[ 103], 90.00th=[ 133], 95.00th=[ 
141], 00:17:11.751 | 99.00th=[ 169], 99.50th=[ 196], 99.90th=[ 322], 99.95th=[ 359], 00:17:11.751 | 99.99th=[ 709] 00:17:11.751 bw ( KiB/s): min=12696, max=12696, per=25.69%, avg=12696.00, stdev= 0.00, samples=1 00:17:11.752 iops : min= 3174, max= 3174, avg=3174.00, stdev= 0.00, samples=1 00:17:11.752 write: IOPS=3250, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1000msec); 0 zone resets 00:17:11.752 slat (nsec): min=3619, max=43066, avg=5258.21, stdev=2866.88 00:17:11.752 clat (usec): min=72, max=918, avg=104.43, stdev=30.23 00:17:11.752 lat (usec): min=82, max=929, avg=109.69, stdev=31.01 00:17:11.752 clat percentiles (usec): 00:17:11.752 | 1.00th=[ 86], 5.00th=[ 88], 10.00th=[ 90], 20.00th=[ 92], 00:17:11.752 | 30.00th=[ 94], 40.00th=[ 96], 50.00th=[ 97], 60.00th=[ 99], 00:17:11.752 | 70.00th=[ 103], 80.00th=[ 109], 90.00th=[ 122], 95.00th=[ 151], 00:17:11.752 | 99.00th=[ 192], 99.50th=[ 314], 99.90th=[ 367], 99.95th=[ 465], 00:17:11.752 | 99.99th=[ 922] 00:17:11.752 bw ( KiB/s): min=13608, max=13608, per=52.18%, avg=13608.00, stdev= 0.00, samples=1 00:17:11.752 iops : min= 3402, max= 3402, avg=3402.00, stdev= 0.00, samples=1 00:17:11.752 lat (usec) : 100=71.03%, 250=28.43%, 500=0.52%, 750=0.01%, 1000=0.01% 00:17:11.752 cpu : usr=3.90%, sys=5.60%, ctx=9406, majf=0, minf=7 00:17:11.752 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:11.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:11.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:11.752 issued rwts: total=6156,3250,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:11.752 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:11.752 00:17:11.752 Run status group 0 (all jobs): 00:17:11.752 READ: bw=48.3MiB/s (50.6MB/s), 24.0MiB/s-24.2MiB/s (25.2MB/s-25.4MB/s), io=48.3MiB (50.6MB), run=1000-1000msec 00:17:11.752 WRITE: bw=25.5MiB/s (26.7MB/s), 12.7MiB/s-12.8MiB/s (13.3MB/s-13.4MB/s), io=25.5MiB (26.7MB), run=1000-1000msec 
00:17:11.752 00:17:11.752 Disk stats (read/write): 00:17:11.752 sda: ios=5636/3072, merge=0/0, ticks=529/294, in_queue=823, util=90.57% 00:17:11.752 sdb: ios=5571/3072, merge=0/0, ticks=517/301, in_queue=818, util=91.04% 00:17:11.752 05:06:26 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 32 -t randrw -r 1 -v 00:17:11.752 [global] 00:17:11.752 thread=1 00:17:11.752 invalidate=1 00:17:11.752 rw=randrw 00:17:11.752 time_based=1 00:17:11.752 runtime=1 00:17:11.752 ioengine=libaio 00:17:11.752 direct=1 00:17:11.752 bs=131072 00:17:11.752 iodepth=32 00:17:11.752 norandommap=0 00:17:11.752 numjobs=1 00:17:11.752 00:17:11.752 verify_dump=1 00:17:11.752 verify_backlog=512 00:17:11.752 verify_state_save=0 00:17:11.752 do_verify=1 00:17:11.752 verify=crc32c-intel 00:17:11.752 [job0] 00:17:11.752 filename=/dev/sda 00:17:11.752 [job1] 00:17:11.752 filename=/dev/sdb 00:17:11.752 queue_depth set to 113 (sda) 00:17:11.752 queue_depth set to 113 (sdb) 00:17:11.752 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:17:11.752 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:17:11.752 fio-3.35 00:17:11.752 Starting 2 threads 00:17:11.752 [2024-07-24 05:06:26.341949] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:11.752 [2024-07-24 05:06:26.346310] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:12.718 [2024-07-24 05:06:27.305062] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:12.977 [2024-07-24 05:06:27.484583] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:12.977 00:17:12.977 job0: (groupid=0, jobs=1): err= 0: pid=76998: Wed Jul 24 05:06:27 2024 00:17:12.977 read: IOPS=1613, BW=202MiB/s (211MB/s)(206MiB/1021msec) 00:17:12.977 slat 
(usec): min=8, max=163, avg=18.87, stdev= 9.88 00:17:12.977 clat (usec): min=1234, max=34893, avg=7054.89, stdev=5503.62 00:17:12.977 lat (usec): min=1268, max=34903, avg=7073.76, stdev=5502.84 00:17:12.977 clat percentiles (usec): 00:17:12.977 | 1.00th=[ 1369], 5.00th=[ 1565], 10.00th=[ 1696], 20.00th=[ 2040], 00:17:12.977 | 30.00th=[ 3359], 40.00th=[ 5604], 50.00th=[ 6063], 60.00th=[ 6718], 00:17:12.977 | 70.00th=[ 7308], 80.00th=[ 9372], 90.00th=[16188], 95.00th=[19268], 00:17:12.977 | 99.00th=[22938], 99.50th=[28181], 99.90th=[32637], 99.95th=[34866], 00:17:12.977 | 99.99th=[34866] 00:17:12.977 bw ( KiB/s): min=98304, max=122356, per=33.27%, avg=110330.00, stdev=17007.33, samples=2 00:17:12.977 iops : min= 768, max= 955, avg=861.50, stdev=132.23, samples=2 00:17:12.977 write: IOPS=961, BW=120MiB/s (126MB/s)(111MiB/927msec); 0 zone resets 00:17:12.977 slat (usec): min=41, max=5884, avg=93.66, stdev=196.40 00:17:12.977 clat (usec): min=9156, max=45641, avg=23287.60, stdev=3695.00 00:17:12.977 lat (usec): min=9241, max=46335, avg=23381.26, stdev=3706.99 00:17:12.977 clat percentiles (usec): 00:17:12.977 | 1.00th=[15533], 5.00th=[18744], 10.00th=[20579], 20.00th=[21365], 00:17:12.977 | 30.00th=[21890], 40.00th=[22414], 50.00th=[22676], 60.00th=[23200], 00:17:12.977 | 70.00th=[23725], 80.00th=[24773], 90.00th=[25822], 95.00th=[28705], 00:17:12.977 | 99.00th=[40633], 99.50th=[43779], 99.90th=[45876], 99.95th=[45876], 00:17:12.977 | 99.99th=[45876] 00:17:12.977 bw ( KiB/s): min=91136, max=131334, per=45.29%, avg=111235.00, stdev=28424.28, samples=2 00:17:12.977 iops : min= 712, max= 1026, avg=869.00, stdev=222.03, samples=2 00:17:12.977 lat (msec) : 2=12.53%, 4=8.75%, 10=31.84%, 20=11.98%, 50=34.91% 00:17:12.977 cpu : usr=10.59%, sys=4.80%, ctx=2174, majf=0, minf=9 00:17:12.977 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=96.3%, >=64=0.0% 00:17:12.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.977 complete : 0=0.0%, 
4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:17:12.977 issued rwts: total=1647,891,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.977 latency : target=0, window=0, percentile=100.00%, depth=32 00:17:12.977 job1: (groupid=0, jobs=1): err= 0: pid=77001: Wed Jul 24 05:06:27 2024 00:17:12.977 read: IOPS=977, BW=122MiB/s (128MB/s)(125MiB/1021msec) 00:17:12.977 slat (usec): min=6, max=471, avg=20.35, stdev=20.53 00:17:12.977 clat (usec): min=1172, max=31308, avg=6578.65, stdev=5960.73 00:17:12.977 lat (usec): min=1192, max=31318, avg=6599.00, stdev=5958.72 00:17:12.977 clat percentiles (usec): 00:17:12.977 | 1.00th=[ 1336], 5.00th=[ 1483], 10.00th=[ 1598], 20.00th=[ 1811], 00:17:12.977 | 30.00th=[ 2114], 40.00th=[ 2671], 50.00th=[ 4080], 60.00th=[ 5866], 00:17:12.977 | 70.00th=[ 7832], 80.00th=[11338], 90.00th=[16319], 95.00th=[19006], 00:17:12.977 | 99.00th=[23462], 99.50th=[25822], 99.90th=[31327], 99.95th=[31327], 00:17:12.977 | 99.99th=[31327] 00:17:12.977 bw ( KiB/s): min=119040, max=134656, per=38.25%, avg=126848.00, stdev=11042.18, samples=2 00:17:12.977 iops : min= 930, max= 1052, avg=991.00, stdev=86.27, samples=2 00:17:12.977 write: IOPS=1046, BW=131MiB/s (137MB/s)(134MiB/1021msec); 0 zone resets 00:17:12.977 slat (usec): min=27, max=684, avg=83.50, stdev=37.44 00:17:12.977 clat (usec): min=1739, max=44364, avg=24271.02, stdev=5353.24 00:17:12.977 lat (usec): min=1815, max=44422, avg=24354.52, stdev=5357.46 00:17:12.977 clat percentiles (usec): 00:17:12.977 | 1.00th=[ 8455], 5.00th=[18220], 10.00th=[20317], 20.00th=[21627], 00:17:12.977 | 30.00th=[22152], 40.00th=[22676], 50.00th=[23200], 60.00th=[23987], 00:17:12.977 | 70.00th=[25035], 80.00th=[26084], 90.00th=[30016], 95.00th=[36439], 00:17:12.977 | 99.00th=[42206], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:17:12.977 | 99.99th=[44303] 00:17:12.977 bw ( KiB/s): min=124928, max=142080, per=54.36%, avg=133504.00, stdev=12128.30, samples=2 00:17:12.977 iops : min= 976, max= 1110, avg=1043.00, 
stdev=94.75, samples=2 00:17:12.977 lat (msec) : 2=13.41%, 4=10.70%, 10=13.46%, 20=13.55%, 50=48.89% 00:17:12.977 cpu : usr=7.06%, sys=5.10%, ctx=1721, majf=0, minf=11 00:17:12.977 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0% 00:17:12.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:17:12.977 issued rwts: total=998,1068,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.977 latency : target=0, window=0, percentile=100.00%, depth=32 00:17:12.977 00:17:12.977 Run status group 0 (all jobs): 00:17:12.977 READ: bw=324MiB/s (340MB/s), 122MiB/s-202MiB/s (128MB/s-211MB/s), io=331MiB (347MB), run=1021-1021msec 00:17:12.977 WRITE: bw=240MiB/s (251MB/s), 120MiB/s-131MiB/s (126MB/s-137MB/s), io=245MiB (257MB), run=927-1021msec 00:17:12.977 00:17:12.977 Disk stats (read/write): 00:17:12.977 sda: ios=1492/778, merge=0/0, ticks=9761/17532, in_queue=27292, util=89.67% 00:17:12.977 sdb: ios=893/926, merge=0/0, ticks=5338/21896, in_queue=27235, util=90.03% 00:17:12.977 05:06:27 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@101 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 524288 -d 128 -t randrw -r 1 -v 00:17:12.977 [global] 00:17:12.977 thread=1 00:17:12.977 invalidate=1 00:17:12.977 rw=randrw 00:17:12.977 time_based=1 00:17:12.977 runtime=1 00:17:12.977 ioengine=libaio 00:17:12.977 direct=1 00:17:12.977 bs=524288 00:17:12.977 iodepth=128 00:17:12.977 norandommap=0 00:17:12.977 numjobs=1 00:17:12.977 00:17:12.977 verify_dump=1 00:17:12.977 verify_backlog=512 00:17:12.977 verify_state_save=0 00:17:12.977 do_verify=1 00:17:12.977 verify=crc32c-intel 00:17:12.977 [job0] 00:17:12.977 filename=/dev/sda 00:17:12.977 [job1] 00:17:12.977 filename=/dev/sdb 00:17:13.235 queue_depth set to 113 (sda) 00:17:13.235 queue_depth set to 113 (sdb) 00:17:13.235 job0: (g=0): rw=randrw, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, 
ioengine=libaio, iodepth=128 00:17:13.235 job1: (g=0): rw=randrw, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=128 00:17:13.235 fio-3.35 00:17:13.235 Starting 2 threads 00:17:13.235 [2024-07-24 05:06:27.724406] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:13.235 [2024-07-24 05:06:27.731856] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:14.177 [2024-07-24 05:06:28.735776] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:14.436 [2024-07-24 05:06:28.983125] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:14.436 00:17:14.436 job0: (groupid=0, jobs=1): err= 0: pid=77069: Wed Jul 24 05:06:29 2024 00:17:14.436 read: IOPS=360, BW=180MiB/s (189MB/s)(200MiB/1111msec) 00:17:14.436 slat (usec): min=21, max=38413, avg=1416.69, stdev=3966.95 00:17:14.436 clat (msec): min=62, max=367, avg=192.80, stdev=72.64 00:17:14.436 lat (msec): min=78, max=367, avg=194.22, stdev=72.74 00:17:14.436 clat percentiles (msec): 00:17:14.436 | 1.00th=[ 80], 5.00th=[ 93], 10.00th=[ 110], 20.00th=[ 138], 00:17:14.436 | 30.00th=[ 155], 40.00th=[ 169], 50.00th=[ 178], 60.00th=[ 188], 00:17:14.436 | 70.00th=[ 201], 80.00th=[ 249], 90.00th=[ 300], 95.00th=[ 368], 00:17:14.436 | 99.00th=[ 368], 99.50th=[ 368], 99.90th=[ 368], 99.95th=[ 368], 00:17:14.436 | 99.99th=[ 368] 00:17:14.436 bw ( KiB/s): min=90112, max=157696, per=37.55%, avg=123904.00, stdev=47789.10, samples=2 00:17:14.436 iops : min= 176, max= 308, avg=242.00, stdev=93.34, samples=2 00:17:14.436 write: IOPS=374, BW=187MiB/s (196MB/s)(135MiB/721msec); 0 zone resets 00:17:14.436 slat (usec): min=170, max=10271, avg=1350.90, stdev=2298.83 00:17:14.436 clat (msec): min=85, max=295, avg=183.15, stdev=41.76 00:17:14.436 lat (msec): min=86, max=300, avg=184.50, stdev=41.98 00:17:14.436 clat percentiles (msec): 00:17:14.436 | 1.00th=[ 95], 5.00th=[ 105], 
10.00th=[ 114], 20.00th=[ 146], 00:17:14.436 | 30.00th=[ 176], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 197], 00:17:14.436 | 70.00th=[ 203], 80.00th=[ 209], 90.00th=[ 218], 95.00th=[ 255], 00:17:14.436 | 99.00th=[ 284], 99.50th=[ 284], 99.90th=[ 296], 99.95th=[ 296], 00:17:14.436 | 99.99th=[ 296] 00:17:14.436 bw ( KiB/s): min=133120, max=143360, per=46.02%, avg=138240.00, stdev=7240.77, samples=2 00:17:14.436 iops : min= 260, max= 280, avg=270.00, stdev=14.14, samples=2 00:17:14.436 lat (msec) : 100=4.63%, 250=80.75%, 500=14.63% 00:17:14.436 cpu : usr=10.36%, sys=1.53%, ctx=275, majf=0, minf=7 00:17:14.436 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.8%, 32=9.6%, >=64=81.2% 00:17:14.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.436 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:17:14.436 issued rwts: total=400,270,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.436 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:14.436 job1: (groupid=0, jobs=1): err= 0: pid=77070: Wed Jul 24 05:06:29 2024 00:17:14.436 read: IOPS=296, BW=148MiB/s (155MB/s)(158MiB/1067msec) 00:17:14.437 slat (usec): min=21, max=21372, avg=1304.73, stdev=2841.39 00:17:14.437 clat (msec): min=63, max=281, avg=179.58, stdev=61.29 00:17:14.437 lat (msec): min=69, max=281, avg=180.88, stdev=61.48 00:17:14.437 clat percentiles (msec): 00:17:14.437 | 1.00th=[ 70], 5.00th=[ 104], 10.00th=[ 112], 20.00th=[ 122], 00:17:14.437 | 30.00th=[ 130], 40.00th=[ 138], 50.00th=[ 165], 60.00th=[ 197], 00:17:14.437 | 70.00th=[ 241], 80.00th=[ 251], 90.00th=[ 259], 95.00th=[ 268], 00:17:14.437 | 99.00th=[ 279], 99.50th=[ 284], 99.90th=[ 284], 99.95th=[ 284], 00:17:14.437 | 99.99th=[ 284] 00:17:14.437 bw ( KiB/s): min=78848, max=188416, per=40.50%, avg=133632.00, stdev=77476.28, samples=2 00:17:14.437 iops : min= 154, max= 368, avg=261.00, stdev=151.32, samples=2 00:17:14.437 write: IOPS=333, BW=167MiB/s (175MB/s)(178MiB/1067msec); 0 zone 
resets 00:17:14.437 slat (usec): min=163, max=17994, avg=1648.58, stdev=2580.65 00:17:14.437 clat (msec): min=69, max=303, avg=200.54, stdev=64.51 00:17:14.437 lat (msec): min=69, max=304, avg=202.19, stdev=64.96 00:17:14.437 clat percentiles (msec): 00:17:14.437 | 1.00th=[ 75], 5.00th=[ 105], 10.00th=[ 125], 20.00th=[ 142], 00:17:14.437 | 30.00th=[ 150], 40.00th=[ 163], 50.00th=[ 184], 60.00th=[ 232], 00:17:14.437 | 70.00th=[ 266], 80.00th=[ 275], 90.00th=[ 279], 95.00th=[ 288], 00:17:14.437 | 99.00th=[ 305], 99.50th=[ 305], 99.90th=[ 305], 99.95th=[ 305], 00:17:14.437 | 99.99th=[ 305] 00:17:14.437 bw ( KiB/s): min=83968, max=205824, per=48.24%, avg=144896.00, stdev=86165.20, samples=2 00:17:14.437 iops : min= 164, max= 402, avg=283.00, stdev=168.29, samples=2 00:17:14.437 lat (msec) : 100=4.32%, 250=66.37%, 500=29.32% 00:17:14.437 cpu : usr=9.94%, sys=2.44%, ctx=267, majf=0, minf=5 00:17:14.437 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.6% 00:17:14.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.437 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:14.437 issued rwts: total=316,356,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.437 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:14.437 00:17:14.437 Run status group 0 (all jobs): 00:17:14.437 READ: bw=322MiB/s (338MB/s), 148MiB/s-180MiB/s (155MB/s-189MB/s), io=358MiB (375MB), run=1067-1111msec 00:17:14.437 WRITE: bw=293MiB/s (308MB/s), 167MiB/s-187MiB/s (175MB/s-196MB/s), io=313MiB (328MB), run=721-1067msec 00:17:14.437 00:17:14.437 Disk stats (read/write): 00:17:14.437 sda: ios=418/270, merge=0/0, ticks=30144/22227, in_queue=52372, util=81.92% 00:17:14.437 sdb: ios=365/355, merge=0/0, ticks=22846/33719, in_queue=56566, util=80.92% 00:17:14.696 05:06:29 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1048576 -d 1024 -t read -r 1 -n 4 00:17:14.696 
[global] 00:17:14.696 thread=1 00:17:14.696 invalidate=1 00:17:14.696 rw=read 00:17:14.696 time_based=1 00:17:14.696 runtime=1 00:17:14.696 ioengine=libaio 00:17:14.696 direct=1 00:17:14.696 bs=1048576 00:17:14.696 iodepth=1024 00:17:14.696 norandommap=1 00:17:14.696 numjobs=4 00:17:14.696 00:17:14.696 [job0] 00:17:14.696 filename=/dev/sda 00:17:14.696 [job1] 00:17:14.696 filename=/dev/sdb 00:17:14.696 queue_depth set to 113 (sda) 00:17:14.696 queue_depth set to 113 (sdb) 00:17:14.696 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1024 00:17:14.696 ... 00:17:14.696 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1024 00:17:14.696 ... 00:17:14.696 fio-3.35 00:17:14.696 Starting 8 threads 00:17:16.602 00:17:16.602 job0: (groupid=0, jobs=1): err= 0: pid=77137: Wed Jul 24 05:06:31 2024 00:17:16.602 read: IOPS=18, BW=18.4MiB/s (19.3MB/s)(32.0MiB/1738msec) 00:17:16.602 slat (usec): min=491, max=621286, avg=31442.71, stdev=113755.18 00:17:16.602 clat (msec): min=731, max=1737, avg=1228.16, stdev=444.19 00:17:16.602 lat (msec): min=739, max=1737, avg=1259.61, stdev=443.51 00:17:16.602 clat percentiles (msec): 00:17:16.602 | 1.00th=[ 735], 5.00th=[ 743], 10.00th=[ 743], 20.00th=[ 743], 00:17:16.602 | 30.00th=[ 743], 40.00th=[ 760], 50.00th=[ 1401], 60.00th=[ 1636], 00:17:16.602 | 70.00th=[ 1653], 80.00th=[ 1687], 90.00th=[ 1703], 95.00th=[ 1720], 00:17:16.602 | 99.00th=[ 1737], 99.50th=[ 1737], 99.90th=[ 1737], 99.95th=[ 1737], 00:17:16.602 | 99.99th=[ 1737] 00:17:16.602 lat (msec) : 750=37.50%, 1000=6.25%, 2000=56.25% 00:17:16.602 cpu : usr=0.00%, sys=1.55%, ctx=48, majf=0, minf=8193 00:17:16.602 IO depths : 1=3.1%, 2=6.2%, 4=12.5%, 8=25.0%, 16=50.0%, 32=3.1%, >=64=0.0% 00:17:16.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.602 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 
00:17:16.602 issued rwts: total=32,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.602 latency : target=0, window=0, percentile=100.00%, depth=1024 00:17:16.602 job0: (groupid=0, jobs=1): err= 0: pid=77138: Wed Jul 24 05:06:31 2024 00:17:16.602 read: IOPS=25, BW=25.9MiB/s (27.1MB/s)(45.0MiB/1738msec) 00:17:16.602 slat (usec): min=514, max=621225, avg=22359.87, stdev=96936.12 00:17:16.602 clat (msec): min=731, max=1736, avg=1612.71, stdev=224.62 00:17:16.602 lat (msec): min=741, max=1737, avg=1635.07, stdev=180.70 00:17:16.602 clat percentiles (msec): 00:17:16.602 | 1.00th=[ 735], 5.00th=[ 1368], 10.00th=[ 1368], 20.00th=[ 1603], 00:17:16.602 | 30.00th=[ 1636], 40.00th=[ 1670], 50.00th=[ 1720], 60.00th=[ 1720], 00:17:16.602 | 70.00th=[ 1720], 80.00th=[ 1720], 90.00th=[ 1737], 95.00th=[ 1737], 00:17:16.602 | 99.00th=[ 1737], 99.50th=[ 1737], 99.90th=[ 1737], 99.95th=[ 1737], 00:17:16.602 | 99.99th=[ 1737] 00:17:16.602 lat (msec) : 750=4.44%, 2000=95.56% 00:17:16.602 cpu : usr=0.00%, sys=2.01%, ctx=47, majf=0, minf=11521 00:17:16.602 IO depths : 1=2.2%, 2=4.4%, 4=8.9%, 8=17.8%, 16=35.6%, 32=31.1%, >=64=0.0% 00:17:16.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.602 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:16.602 issued rwts: total=45,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.602 latency : target=0, window=0, percentile=100.00%, depth=1024 00:17:16.602 job0: (groupid=0, jobs=1): err= 0: pid=77139: Wed Jul 24 05:06:31 2024 00:17:16.602 read: IOPS=30, BW=30.7MiB/s (32.2MB/s)(54.0MiB/1757msec) 00:17:16.602 slat (usec): min=525, max=635276, avg=18776.82, stdev=90237.48 00:17:16.602 clat (msec): min=741, max=1755, avg=1670.97, stdev=159.91 00:17:16.602 lat (msec): min=1377, max=1756, avg=1689.75, stdev=95.20 00:17:16.602 clat percentiles (msec): 00:17:16.602 | 1.00th=[ 743], 5.00th=[ 1385], 10.00th=[ 1603], 20.00th=[ 1653], 00:17:16.602 | 30.00th=[ 1687], 40.00th=[ 1720], 50.00th=[ 1720], 
60.00th=[ 1720], 00:17:16.602 | 70.00th=[ 1737], 80.00th=[ 1754], 90.00th=[ 1754], 95.00th=[ 1754], 00:17:16.602 | 99.00th=[ 1754], 99.50th=[ 1754], 99.90th=[ 1754], 99.95th=[ 1754], 00:17:16.602 | 99.99th=[ 1754] 00:17:16.602 lat (msec) : 750=1.85%, 2000=98.15% 00:17:16.602 cpu : usr=0.00%, sys=2.33%, ctx=39, majf=0, minf=13825 00:17:16.602 IO depths : 1=1.9%, 2=3.7%, 4=7.4%, 8=14.8%, 16=29.6%, 32=42.6%, >=64=0.0% 00:17:16.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.602 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:16.602 issued rwts: total=54,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.602 latency : target=0, window=0, percentile=100.00%, depth=1024 00:17:16.602 job0: (groupid=0, jobs=1): err= 0: pid=77140: Wed Jul 24 05:06:31 2024 00:17:16.602 read: IOPS=23, BW=23.5MiB/s (24.7MB/s)(41.0MiB/1742msec) 00:17:16.602 slat (usec): min=590, max=630805, avg=24741.75, stdev=102797.64 00:17:16.602 clat (msec): min=726, max=1738, avg=1623.80, stdev=187.45 00:17:16.602 lat (msec): min=1357, max=1741, avg=1648.55, stdev=121.40 00:17:16.602 clat percentiles (msec): 00:17:16.602 | 1.00th=[ 726], 5.00th=[ 1368], 10.00th=[ 1368], 20.00th=[ 1620], 00:17:16.602 | 30.00th=[ 1653], 40.00th=[ 1670], 50.00th=[ 1703], 60.00th=[ 1720], 00:17:16.602 | 70.00th=[ 1720], 80.00th=[ 1720], 90.00th=[ 1737], 95.00th=[ 1737], 00:17:16.602 | 99.00th=[ 1737], 99.50th=[ 1737], 99.90th=[ 1737], 99.95th=[ 1737], 00:17:16.602 | 99.99th=[ 1737] 00:17:16.602 lat (msec) : 750=2.44%, 2000=97.56% 00:17:16.602 cpu : usr=0.06%, sys=1.84%, ctx=44, majf=0, minf=10497 00:17:16.602 IO depths : 1=2.4%, 2=4.9%, 4=9.8%, 8=19.5%, 16=39.0%, 32=24.4%, >=64=0.0% 00:17:16.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.602 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:16.602 issued rwts: total=41,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.602 latency : target=0, window=0, 
percentile=100.00%, depth=1024 00:17:16.602 job1: (groupid=0, jobs=1): err= 0: pid=77141: Wed Jul 24 05:06:31 2024 00:17:16.602 read: IOPS=19, BW=19.3MiB/s (20.2MB/s)(34.0MiB/1765msec) 00:17:16.602 slat (usec): min=522, max=645093, avg=29665.18, stdev=115473.76 00:17:16.602 clat (msec): min=755, max=1762, avg=1622.41, stdev=283.64 00:17:16.602 lat (msec): min=764, max=1764, avg=1652.07, stdev=239.59 00:17:16.602 clat percentiles (msec): 00:17:16.602 | 1.00th=[ 760], 5.00th=[ 768], 10.00th=[ 1418], 20.00th=[ 1653], 00:17:16.602 | 30.00th=[ 1687], 40.00th=[ 1720], 50.00th=[ 1737], 60.00th=[ 1737], 00:17:16.602 | 70.00th=[ 1754], 80.00th=[ 1754], 90.00th=[ 1754], 95.00th=[ 1754], 00:17:16.602 | 99.00th=[ 1770], 99.50th=[ 1770], 99.90th=[ 1770], 99.95th=[ 1770], 00:17:16.602 | 99.99th=[ 1770] 00:17:16.602 lat (msec) : 1000=8.82%, 2000=91.18% 00:17:16.602 cpu : usr=0.00%, sys=1.36%, ctx=45, majf=0, minf=8705 00:17:16.602 IO depths : 1=2.9%, 2=5.9%, 4=11.8%, 8=23.5%, 16=47.1%, 32=8.8%, >=64=0.0% 00:17:16.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.602 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:16.602 issued rwts: total=34,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.602 latency : target=0, window=0, percentile=100.00%, depth=1024 00:17:16.602 job1: (groupid=0, jobs=1): err= 0: pid=77142: Wed Jul 24 05:06:31 2024 00:17:16.602 read: IOPS=13, BW=13.2MiB/s (13.9MB/s)(23.0MiB/1739msec) 00:17:16.602 slat (usec): min=821, max=621269, avg=43478.93, stdev=134403.27 00:17:16.602 clat (msec): min=738, max=1735, avg=1531.23, stdev=334.26 00:17:16.602 lat (msec): min=747, max=1738, avg=1574.71, stdev=288.27 00:17:16.602 clat percentiles (msec): 00:17:16.602 | 1.00th=[ 735], 5.00th=[ 751], 10.00th=[ 760], 20.00th=[ 1385], 00:17:16.603 | 30.00th=[ 1418], 40.00th=[ 1670], 50.00th=[ 1703], 60.00th=[ 1720], 00:17:16.603 | 70.00th=[ 1737], 80.00th=[ 1737], 90.00th=[ 1737], 95.00th=[ 1737], 00:17:16.603 | 
99.00th=[ 1737], 99.50th=[ 1737], 99.90th=[ 1737], 99.95th=[ 1737], 00:17:16.603 | 99.99th=[ 1737] 00:17:16.603 lat (msec) : 750=8.70%, 1000=4.35%, 2000=86.96% 00:17:16.603 cpu : usr=0.00%, sys=1.15%, ctx=50, majf=0, minf=5889 00:17:16.603 IO depths : 1=4.3%, 2=8.7%, 4=17.4%, 8=34.8%, 16=34.8%, 32=0.0%, >=64=0.0% 00:17:16.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.603 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:16.603 issued rwts: total=23,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.603 latency : target=0, window=0, percentile=100.00%, depth=1024 00:17:16.603 job1: (groupid=0, jobs=1): err= 0: pid=77143: Wed Jul 24 05:06:31 2024 00:17:16.603 read: IOPS=12, BW=12.1MiB/s (12.7MB/s)(21.0MiB/1736msec) 00:17:16.603 slat (usec): min=539, max=621303, avg=47625.37, stdev=139045.24 00:17:16.603 clat (msec): min=735, max=1733, avg=1487.93, stdev=337.01 00:17:16.603 lat (msec): min=753, max=1735, avg=1535.55, stdev=293.22 00:17:16.603 clat percentiles (msec): 00:17:16.603 | 1.00th=[ 735], 5.00th=[ 751], 10.00th=[ 751], 20.00th=[ 1385], 00:17:16.603 | 30.00th=[ 1401], 40.00th=[ 1620], 50.00th=[ 1653], 60.00th=[ 1687], 00:17:16.603 | 70.00th=[ 1720], 80.00th=[ 1737], 90.00th=[ 1737], 95.00th=[ 1737], 00:17:16.603 | 99.00th=[ 1737], 99.50th=[ 1737], 99.90th=[ 1737], 99.95th=[ 1737], 00:17:16.603 | 99.99th=[ 1737] 00:17:16.603 lat (msec) : 750=4.76%, 1000=9.52%, 2000=85.71% 00:17:16.603 cpu : usr=0.00%, sys=1.04%, ctx=45, majf=0, minf=5377 00:17:16.603 IO depths : 1=4.8%, 2=9.5%, 4=19.0%, 8=38.1%, 16=28.6%, 32=0.0%, >=64=0.0% 00:17:16.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.603 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:16.603 issued rwts: total=21,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.603 latency : target=0, window=0, percentile=100.00%, depth=1024 00:17:16.603 job1: (groupid=0, jobs=1): err= 0: pid=77144: Wed Jul 
24 05:06:31 2024 00:17:16.603 read: IOPS=23, BW=23.1MiB/s (24.3MB/s)(41.0MiB/1772msec) 00:17:16.603 slat (usec): min=524, max=630838, avg=24819.85, stdev=103806.14 00:17:16.603 clat (msec): min=753, max=1769, avg=1692.25, stdev=171.49 00:17:16.603 lat (msec): min=1384, max=1771, avg=1717.07, stdev=83.04 00:17:16.603 clat percentiles (msec): 00:17:16.603 | 1.00th=[ 751], 5.00th=[ 1385], 10.00th=[ 1653], 20.00th=[ 1687], 00:17:16.603 | 30.00th=[ 1737], 40.00th=[ 1737], 50.00th=[ 1737], 60.00th=[ 1754], 00:17:16.603 | 70.00th=[ 1754], 80.00th=[ 1754], 90.00th=[ 1770], 95.00th=[ 1770], 00:17:16.603 | 99.00th=[ 1770], 99.50th=[ 1770], 99.90th=[ 1770], 99.95th=[ 1770], 00:17:16.603 | 99.99th=[ 1770] 00:17:16.603 lat (msec) : 1000=2.44%, 2000=97.56% 00:17:16.603 cpu : usr=0.00%, sys=1.52%, ctx=47, majf=0, minf=10497 00:17:16.603 IO depths : 1=2.4%, 2=4.9%, 4=9.8%, 8=19.5%, 16=39.0%, 32=24.4%, >=64=0.0% 00:17:16.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.603 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:16.603 issued rwts: total=41,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.603 latency : target=0, window=0, percentile=100.00%, depth=1024 00:17:16.603 00:17:16.603 Run status group 0 (all jobs): 00:17:16.603 READ: bw=164MiB/s (172MB/s), 12.1MiB/s-30.7MiB/s (12.7MB/s-32.2MB/s), io=291MiB (305MB), run=1736-1772msec 00:17:16.603 00:17:16.603 Disk stats (read/write): 00:17:16.603 sda: ios=128/0, merge=0/0, ticks=38142/0, in_queue=38142, util=93.71% 00:17:16.603 sdb: ios=76/0, merge=0/0, ticks=27345/0, in_queue=27344, util=92.02% 00:17:16.862 05:06:31 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@104 -- # '[' 1 -eq 1 ']' 00:17:16.862 05:06:31 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 1 -t write -r 300 -v 00:17:16.862 [global] 00:17:16.862 thread=1 00:17:16.862 invalidate=1 00:17:16.862 rw=write 00:17:16.862 time_based=1 00:17:16.862 
runtime=300 00:17:16.862 ioengine=libaio 00:17:16.862 direct=1 00:17:16.862 bs=4096 00:17:16.862 iodepth=1 00:17:16.862 norandommap=0 00:17:16.862 numjobs=1 00:17:16.862 00:17:16.862 verify_dump=1 00:17:16.862 verify_backlog=512 00:17:16.862 verify_state_save=0 00:17:16.862 do_verify=1 00:17:16.862 verify=crc32c-intel 00:17:16.862 [job0] 00:17:16.862 filename=/dev/sda 00:17:16.862 [job1] 00:17:16.863 filename=/dev/sdb 00:17:16.863 queue_depth set to 113 (sda) 00:17:16.863 queue_depth set to 113 (sdb) 00:17:16.863 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:16.863 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:16.863 fio-3.35 00:17:16.863 Starting 2 threads 00:17:16.863 [2024-07-24 05:06:31.460854] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:16.863 [2024-07-24 05:06:31.467470] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:24.969 [2024-07-24 05:06:38.543037] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:31.565 [2024-07-24 05:06:45.750024] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:39.677 [2024-07-24 05:06:52.955762] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:46.296 [2024-07-24 05:07:00.052611] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:52.855 [2024-07-24 05:07:07.134851] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:00.986 [2024-07-24 05:07:14.203311] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:07.546 [2024-07-24 05:07:21.199786] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:14.142 [2024-07-24 05:07:28.264241] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY 
VPD page 0xb9 00:18:14.142 [2024-07-24 05:07:28.283631] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:20.702 [2024-07-24 05:07:34.735480] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:27.263 [2024-07-24 05:07:41.086125] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:33.880 [2024-07-24 05:07:47.447192] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:40.440 [2024-07-24 05:07:53.836439] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:45.746 [2024-07-24 05:08:00.220535] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:52.305 [2024-07-24 05:08:06.613610] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:58.865 [2024-07-24 05:08:12.887835] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:05.436 [2024-07-24 05:08:19.082023] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:05.436 [2024-07-24 05:08:19.085804] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:10.725 [2024-07-24 05:08:25.287346] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:17.359 [2024-07-24 05:08:31.478901] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:23.964 [2024-07-24 05:08:37.674627] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:29.251 [2024-07-24 05:08:43.866444] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:35.821 [2024-07-24 05:08:50.072701] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:42.437 [2024-07-24 05:08:56.338646] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:49.001 [2024-07-24 
05:09:02.577975] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:54.270 [2024-07-24 05:09:08.711014] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:54.270 [2024-07-24 05:09:08.823678] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:00.840 [2024-07-24 05:09:15.140451] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:07.459 [2024-07-24 05:09:21.808791] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:14.021 [2024-07-24 05:09:28.448921] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:20.583 [2024-07-24 05:09:35.100303] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:27.139 [2024-07-24 05:09:41.745888] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:33.694 [2024-07-24 05:09:48.274718] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:40.250 [2024-07-24 05:09:54.794050] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:46.833 [2024-07-24 05:10:01.048375] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:46.833 [2024-07-24 05:10:01.155851] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:53.387 [2024-07-24 05:10:07.469259] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:59.946 [2024-07-24 05:10:13.761213] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:06.495 [2024-07-24 05:10:20.059824] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:11.765 [2024-07-24 05:10:26.354993] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:18.353 [2024-07-24 05:10:32.652434] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:24.911 [2024-07-24 05:10:38.879602] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:31.496 [2024-07-24 05:10:45.093010] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:36.764 [2024-07-24 05:10:51.366498] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:37.022 [2024-07-24 05:10:51.492179] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:43.592 [2024-07-24 05:10:57.777949] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:50.147 [2024-07-24 05:11:03.995847] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:56.739 [2024-07-24 05:11:10.249107] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:03.305 [2024-07-24 05:11:16.751048] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:08.574 [2024-07-24 05:11:23.030093] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:15.143 [2024-07-24 05:11:29.291369] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:17.047 [2024-07-24 05:11:31.576916] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:17.047 [2024-07-24 05:11:31.580813] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:17.047 00:22:17.047 job0: (groupid=0, jobs=1): err= 0: pid=77195: Wed Jul 24 05:11:31 2024 00:22:17.047 read: IOPS=5063, BW=19.8MiB/s (20.7MB/s)(5934MiB/299996msec) 00:22:17.047 slat (usec): min=2, max=2834, avg= 5.94, stdev= 5.90 00:22:17.047 clat (nsec): min=1095, max=4980.1k, avg=90545.14, stdev=15322.35 00:22:17.047 lat (usec): min=57, max=4984, avg=96.49, stdev=15.16 00:22:17.047 clat percentiles (usec): 00:22:17.047 | 1.00th=[ 59], 5.00th=[ 79], 
10.00th=[ 82], 20.00th=[ 83], 00:22:17.047 | 30.00th=[ 85], 40.00th=[ 87], 50.00th=[ 89], 60.00th=[ 92], 00:22:17.047 | 70.00th=[ 95], 80.00th=[ 97], 90.00th=[ 104], 95.00th=[ 110], 00:22:17.047 | 99.00th=[ 124], 99.50th=[ 130], 99.90th=[ 174], 99.95th=[ 206], 00:22:17.047 | 99.99th=[ 363] 00:22:17.047 write: IOPS=5064, BW=19.8MiB/s (20.7MB/s)(5935MiB/299996msec); 0 zone resets 00:22:17.047 slat (usec): min=3, max=668, avg= 6.70, stdev= 5.38 00:22:17.047 clat (nsec): min=1057, max=4007.0k, avg=92680.80, stdev=17822.14 00:22:17.047 lat (usec): min=56, max=4013, avg=99.39, stdev=17.46 00:22:17.047 clat percentiles (usec): 00:22:17.047 | 1.00th=[ 56], 5.00th=[ 81], 10.00th=[ 83], 20.00th=[ 85], 00:22:17.047 | 30.00th=[ 87], 40.00th=[ 88], 50.00th=[ 90], 60.00th=[ 94], 00:22:17.047 | 70.00th=[ 97], 80.00th=[ 101], 90.00th=[ 108], 95.00th=[ 114], 00:22:17.047 | 99.00th=[ 128], 99.50th=[ 135], 99.90th=[ 202], 99.95th=[ 269], 00:22:17.047 | 99.99th=[ 502] 00:22:17.047 bw ( KiB/s): min=16384, max=23632, per=50.05%, avg=20283.14, stdev=1195.57, samples=599 00:22:17.047 iops : min= 4096, max= 5908, avg=5070.71, stdev=298.87, samples=599 00:22:17.047 lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.15% 00:22:17.047 lat (usec) : 100=81.78%, 250=18.01%, 500=0.03%, 750=0.01%, 1000=0.01% 00:22:17.047 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% 00:22:17.047 cpu : usr=2.82%, sys=6.26%, ctx=3125996, majf=0, minf=1 00:22:17.047 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:17.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.047 issued rwts: total=1519104,1519321,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.047 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:17.047 job1: (groupid=0, jobs=1): err= 0: pid=77196: Wed Jul 24 05:11:31 2024 00:22:17.047 read: IOPS=5065, BW=19.8MiB/s (20.7MB/s)(5936MiB/300000msec) 
00:22:17.047 slat (usec): min=2, max=1208, avg= 3.99, stdev= 3.45 00:22:17.047 clat (nsec): min=1061, max=7892.5k, avg=91039.76, stdev=16001.42 00:22:17.047 lat (usec): min=44, max=7896, avg=95.03, stdev=15.87 00:22:17.047 clat percentiles (usec): 00:22:17.047 | 1.00th=[ 74], 5.00th=[ 79], 10.00th=[ 82], 20.00th=[ 84], 00:22:17.047 | 30.00th=[ 86], 40.00th=[ 87], 50.00th=[ 89], 60.00th=[ 91], 00:22:17.047 | 70.00th=[ 95], 80.00th=[ 98], 90.00th=[ 103], 95.00th=[ 111], 00:22:17.047 | 99.00th=[ 124], 99.50th=[ 130], 99.90th=[ 190], 99.95th=[ 227], 00:22:17.047 | 99.99th=[ 379] 00:22:17.047 write: IOPS=5066, BW=19.8MiB/s (20.8MB/s)(5937MiB/300000msec); 0 zone resets 00:22:17.047 slat (usec): min=3, max=637, avg= 5.59, stdev= 4.23 00:22:17.047 clat (nsec): min=1052, max=4982.1k, avg=95317.07, stdev=17963.54 00:22:17.047 lat (usec): min=52, max=4988, avg=100.90, stdev=17.49 00:22:17.047 clat percentiles (usec): 00:22:17.047 | 1.00th=[ 68], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 87], 00:22:17.047 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 94], 60.00th=[ 97], 00:22:17.047 | 70.00th=[ 100], 80.00th=[ 104], 90.00th=[ 111], 95.00th=[ 117], 00:22:17.047 | 99.00th=[ 131], 99.50th=[ 139], 99.90th=[ 210], 99.95th=[ 269], 00:22:17.047 | 99.99th=[ 510] 00:22:17.047 bw ( KiB/s): min=16384, max=22925, per=50.07%, avg=20289.99, stdev=1209.22, samples=599 00:22:17.047 iops : min= 4096, max= 5731, avg=5072.41, stdev=302.28, samples=599 00:22:17.047 lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.05% 00:22:17.047 lat (usec) : 100=78.04%, 250=21.84%, 500=0.04%, 750=0.01%, 1000=0.01% 00:22:17.047 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% 00:22:17.047 cpu : usr=2.61%, sys=5.03%, ctx=3247829, majf=0, minf=2 00:22:17.047 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:17.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.047 issued 
rwts: total=1519616,1519902,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.047 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:17.048 00:22:17.048 Run status group 0 (all jobs): 00:22:17.048 READ: bw=39.6MiB/s (41.5MB/s), 19.8MiB/s-19.8MiB/s (20.7MB/s-20.7MB/s), io=11.6GiB (12.4GB), run=299996-300000msec 00:22:17.048 WRITE: bw=39.6MiB/s (41.5MB/s), 19.8MiB/s-19.8MiB/s (20.7MB/s-20.8MB/s), io=11.6GiB (12.4GB), run=299996-300000msec 00:22:17.048 00:22:17.048 Disk stats (read/write): 00:22:17.048 sda: ios=1520848/1518693, merge=0/0, ticks=134254/136091, in_queue=270344, util=100.00% 00:22:17.048 sdb: ios=1519392/1519275, merge=0/0, ticks=123827/131747, in_queue=255575, util=100.00% 00:22:17.048 05:11:31 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@116 -- # fio_pid=80619 00:22:17.048 05:11:31 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@118 -- # sleep 3 00:22:17.048 05:11:31 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1048576 -d 128 -t rw -r 10 00:22:17.048 [global] 00:22:17.048 thread=1 00:22:17.048 invalidate=1 00:22:17.048 rw=rw 00:22:17.048 time_based=1 00:22:17.048 runtime=10 00:22:17.048 ioengine=libaio 00:22:17.048 direct=1 00:22:17.048 bs=1048576 00:22:17.048 iodepth=128 00:22:17.048 norandommap=1 00:22:17.048 numjobs=1 00:22:17.048 00:22:17.048 [job0] 00:22:17.048 filename=/dev/sda 00:22:17.048 [job1] 00:22:17.048 filename=/dev/sdb 00:22:17.307 queue_depth set to 113 (sda) 00:22:17.307 queue_depth set to 113 (sdb) 00:22:17.307 job0: (g=0): rw=rw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:22:17.307 job1: (g=0): rw=rw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:22:17.307 fio-3.35 00:22:17.307 Starting 2 threads 00:22:17.307 [2024-07-24 05:11:31.807843] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:17.307 [2024-07-24 05:11:31.811530] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:20.594 05:11:34 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:22:20.594 [2024-07-24 05:11:34.865792] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (raid0) received event(SPDK_BDEV_EVENT_REMOVE) 00:22:20.594 [2024-07-24 05:11:34.953347] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d94 00:22:20.594 [2024-07-24 05:11:34.954566] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d94 00:22:20.594 [2024-07-24 05:11:34.956465] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d94 00:22:20.594 [2024-07-24 05:11:34.958104] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d94 00:22:20.594 [2024-07-24 05:11:34.959597] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d94 00:22:20.594 [2024-07-24 05:11:34.961273] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d94 00:22:20.594 [2024-07-24 05:11:34.962829] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d94 00:22:20.594 [2024-07-24 05:11:34.964341] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d94 00:22:20.594 [2024-07-24 05:11:34.966117] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d94 00:22:20.594 [2024-07-24 05:11:34.967601] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d94 00:22:20.594 [2024-07-24 05:11:34.969277] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d94 00:22:20.594 [2024-07-24 05:11:34.970839] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d94 00:22:20.594 [2024-07-24 05:11:34.972721] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d94 00:22:20.594 [2024-07-24 05:11:34.974262] 
iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d94 00:22:20.594 [2024-07-24 05:11:34.976136] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d94 00:22:20.594 05:11:34 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@124 -- # for malloc_bdev in $malloc_bdevs 00:22:20.594 05:11:34 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:22:20.594 [2024-07-24 05:11:34.977991] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d94 00:22:20.594 fio: io_u error on file /dev/sda: Input/output error: read offset=99614720, buflen=1048576 00:22:20.594 fio: io_u error on file /dev/sda: Input/output error: read offset=82837504, buflen=1048576 00:22:20.853 05:11:35 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@124 -- # for malloc_bdev in $malloc_bdevs 00:22:20.853 05:11:35 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:22:21.113 fio: io_u error on file /dev/sda: Input/output error: read offset=83886080, buflen=1048576 00:22:21.113 fio: io_u error on file /dev/sda: Input/output error: write offset=133169152, buflen=1048576 00:22:21.113 fio: io_u error on file /dev/sda: Input/output error: write offset=112197632, buflen=1048576 00:22:21.113 fio: io_u error on file /dev/sda: Input/output error: write offset=0, buflen=1048576 00:22:21.113 fio: io_u error on file /dev/sda: Input/output error: write offset=113246208, buflen=1048576 00:22:21.113 fio: io_u error on file /dev/sda: Input/output error: write offset=1048576, buflen=1048576 00:22:21.113 fio: io_u error on file /dev/sda: Input/output error: write offset=2097152, buflen=1048576 00:22:21.113 fio: io_u error on file /dev/sda: Input/output error: write offset=114294784, buflen=1048576 00:22:21.113 fio: io_u error on file /dev/sda: Input/output error: write offset=115343360, buflen=1048576 00:22:21.113 fio: io_u error on file 
/dev/sda: Input/output error: write offset=116391936, buflen=1048576 00:22:21.113 fio: io_u error on file /dev/sda: Input/output error: read offset=84934656, buflen=1048576 00:22:21.113 fio: io_u error on file /dev/sda: Input/output error: read offset=100663296, buflen=1048576 00:22:21.113 fio: io_u error on file /dev/sda: Input/output error: read offset=101711872, buflen=1048576 00:22:21.113 fio: io_u error on file /dev/sda: Input/output error: read offset=85983232, buflen=1048576 00:22:21.113 fio: io_u error on file /dev/sda: Input/output error: read offset=102760448, buflen=1048576 00:22:21.113 fio: io_u error on file /dev/sda: Input/output error: read offset=87031808, buflen=1048576 00:22:21.113 fio: io_u error on file /dev/sda: Input/output error: write offset=111149056, buflen=1048576 00:22:21.113 fio: io_u error on file /dev/sda: Input/output error: read offset=88080384, buflen=1048576 00:22:21.113 fio: io_u error on file /dev/sda: Input/output error: write offset=117440512, buflen=1048576 00:22:21.113 fio: io_u error on file /dev/sda: Input/output error: write offset=118489088, buflen=1048576 00:22:21.113 fio: io_u error on file /dev/sda: Input/output error: write offset=119537664, buflen=1048576 00:22:21.113 fio: io_u error on file /dev/sda: Input/output error: read offset=89128960, buflen=1048576 00:22:21.113 fio: io_u error on file /dev/sda: Input/output error: read offset=103809024, buflen=1048576 00:22:21.113 fio: io_u error on file /dev/sda: Input/output error: read offset=90177536, buflen=1048576 00:22:21.113 fio: io_u error on file /dev/sda: Input/output error: read offset=91226112, buflen=1048576 00:22:21.113 fio: io_u error on file /dev/sda: Input/output error: read offset=104857600, buflen=1048576 00:22:21.113 fio: io_u error on file /dev/sda: Input/output error: write offset=3145728, buflen=1048576 00:22:21.114 fio: io_u error on file /dev/sda: Input/output error: read offset=105906176, buflen=1048576 00:22:21.114 fio: io_u error on file 
/dev/sda: Input/output error: write offset=120586240, buflen=1048576 00:22:21.114 fio: io_u error on file /dev/sda: Input/output error: write offset=4194304, buflen=1048576 00:22:21.114 fio: io_u error on file /dev/sda: Input/output error: read offset=92274688, buflen=1048576 00:22:21.114 fio: io_u error on file /dev/sda: Input/output error: read offset=93323264, buflen=1048576 00:22:21.114 fio: io_u error on file /dev/sda: Input/output error: write offset=5242880, buflen=1048576 00:22:21.114 fio: io_u error on file /dev/sda: Input/output error: read offset=94371840, buflen=1048576 00:22:21.114 fio: io_u error on file /dev/sda: Input/output error: write offset=121634816, buflen=1048576 00:22:21.114 fio: io_u error on file /dev/sda: Input/output error: read offset=106954752, buflen=1048576 00:22:21.114 fio: io_u error on file /dev/sda: Input/output error: write offset=6291456, buflen=1048576 00:22:21.114 fio: io_u error on file /dev/sda: Input/output error: read offset=108003328, buflen=1048576 00:22:21.114 fio: io_u error on file /dev/sda: Input/output error: read offset=109051904, buflen=1048576 00:22:21.114 fio: io_u error on file /dev/sda: Input/output error: write offset=7340032, buflen=1048576 00:22:21.114 fio: io_u error on file /dev/sda: Input/output error: read offset=95420416, buflen=1048576 00:22:21.114 fio: io_u error on file /dev/sda: Input/output error: read offset=110100480, buflen=1048576 00:22:21.114 fio: io_u error on file /dev/sda: Input/output error: read offset=111149056, buflen=1048576 00:22:21.114 fio: io_u error on file /dev/sda: Input/output error: read offset=96468992, buflen=1048576 00:22:21.114 fio: io_u error on file /dev/sda: Input/output error: write offset=8388608, buflen=1048576 00:22:21.114 fio: io_u error on file /dev/sda: Input/output error: write offset=122683392, buflen=1048576 00:22:21.114 fio: io_u error on file /dev/sda: Input/output error: write offset=123731968, buflen=1048576 00:22:21.114 fio: io_u error on file /dev/sda: 
Input/output error: read offset=112197632, buflen=1048576
00:22:21.114 fio: io_u error on file /dev/sda: Input/output error: write offset=124780544, buflen=1048576
[further io_u EIO records on /dev/sda (1MiB reads/writes across offsets 0-133169152) trimmed]
00:22:21.114 fio: pid=80649, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error
00:22:21.374 05:11:35 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:22:21.633 [2024-07-24 05:11:36.021306] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Malloc2) received event(SPDK_BDEV_EVENT_REMOVE)
00:22:21.633 [2024-07-24 05:11:36.021909] iscsi.c:4336:iscsi_pdu_payload_op_data: *ERROR*: Not found for transfer_tag=e7a
00:22:21.633 [2024-07-24 05:11:36.021959] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=e7a
[further iscsi.c:4221 "Not found task for transfer_tag=e7a..e7d" records trimmed]
00:22:24.922 05:11:38 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@131 -- # fio_status=0
00:22:24.922 05:11:38 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@132 -- # wait 80619
00:22:24.922 fio: io_u error on file /dev/sdb: Input/output error: read offset=741343232, buflen=1048576
[further io_u EIO records on /dev/sdb (1MiB reads/writes across offsets 741343232-867172352) trimmed]
00:22:24.923 fio: pid=80650, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error
00:22:24.923
fio: io_u error on file /dev/sdb: Input/output error: read offset=800063488, buflen=1048576
00:22:24.923 fio: io_u error on file /dev/sdb: Input/output error: write offset=867172352, buflen=1048576
00:22:24.923
00:22:24.923 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=80649: Wed Jul 24 05:11:38 2024
00:22:24.923   read: IOPS=115, BW=95.8MiB/s (100MB/s)(335MiB/3497msec)
00:22:24.923     slat (usec): min=35, max=62354, avg=3803.83, stdev=7568.71
00:22:24.923     clat (msec): min=215, max=753, avg=478.34, stdev=119.51
00:22:24.923      lat (msec): min=216, max=772, avg=481.96, stdev=120.56
00:22:24.923     clat percentiles (msec):
00:22:24.923      |  1.00th=[  224],  5.00th=[  288], 10.00th=[  334], 20.00th=[  372],
00:22:24.923      | 30.00th=[  426], 40.00th=[  460], 50.00th=[  485], 60.00th=[  502],
00:22:24.923      | 70.00th=[  523], 80.00th=[  575], 90.00th=[  667], 95.00th=[  701],
00:22:24.923      | 99.00th=[  735], 99.50th=[  751], 99.90th=[  751], 99.95th=[  751],
00:22:24.923      | 99.99th=[  751]
00:22:24.923     bw (  KiB/s): min=92160, max=139543, per=71.53%, avg=110738.00, stdev=21381.06, samples=6
00:22:24.923     iops        : min=   90, max=  136, avg=108.00, stdev=20.86, samples=6
00:22:24.923   write: IOPS=120, BW=104MiB/s (109MB/s)(362MiB/3497msec); 0 zone resets
00:22:24.923     slat (usec): min=61, max=27004, avg=3616.94, stdev=6325.29
00:22:24.923     clat (msec): min=216, max=816, avg=527.82, stdev=117.36
00:22:24.924      lat (msec): min=216, max=838, avg=531.44, stdev=118.54
00:22:24.924     clat percentiles (msec):
00:22:24.924      |  1.00th=[  249],  5.00th=[  351], 10.00th=[  393], 20.00th=[  439],
00:22:24.924      | 30.00th=[  481], 40.00th=[  498], 50.00th=[  523], 60.00th=[  550],
00:22:24.924      | 70.00th=[  558], 80.00th=[  634], 90.00th=[  693], 95.00th=[  726],
00:22:24.924      | 99.00th=[  793], 99.50th=[  818], 99.90th=[  818], 99.95th=[  818],
00:22:24.924      | 99.99th=[  818]
00:22:24.924     bw (  KiB/s): min=77824, max=147751, per=71.34%, avg=118610.00, stdev=24320.29, samples=6
00:22:24.924     iops        : min=   76, max=  144, avg=115.67, stdev=23.68, samples=6
00:22:24.924   lat (msec)   : 250=2.42%, 500=39.39%, 750=41.21%, 1000=1.45%
00:22:24.924   cpu          : usr=1.17%, sys=1.57%, ctx=465, majf=0, minf=2
00:22:24.924   IO depths    : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.4%
00:22:24.924      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:22:24.924      complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:22:24.924      issued rwts: total=403,422,0,0 short=0,0,0,0 dropped=0,0,0,0
00:22:24.924      latency   : target=0, window=0, percentile=100.00%, depth=128
00:22:24.924 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=80650: Wed Jul 24 05:11:38 2024
00:22:24.924   read: IOPS=110, BW=103MiB/s (108MB/s)(707MiB/6892msec)
00:22:24.924     slat (usec): min=36, max=2905.6k, avg=6563.92, stdev=105620.81
00:22:24.924     clat (msec): min=117, max=3020, avg=336.87, stdev=257.73
00:22:24.924      lat (msec): min=117, max=3024, avg=339.79, stdev=257.47
00:22:24.924     clat percentiles (msec):
00:22:24.924      |  1.00th=[  131],  5.00th=[  178], 10.00th=[  218], 20.00th=[  257],
00:22:24.924      | 30.00th=[  288], 40.00th=[  305], 50.00th=[  321], 60.00th=[  347],
00:22:24.924      | 70.00th=[  359], 80.00th=[  368], 90.00th=[  393], 95.00th=[  418],
00:22:24.924      | 99.00th=[  477], 99.50th=[ 3037], 99.90th=[ 3037], 99.95th=[ 3037],
00:22:24.924      | 99.99th=[ 3037]
00:22:24.924     bw (  KiB/s): min=104657, max=260617, per=100.00%, avg=179619.50, stdev=48812.22, samples=8
00:22:24.924     iops        : min=  102, max=  254, avg=175.13, stdev=47.55, samples=8
00:22:24.924   write: IOPS=120, BW=110MiB/s (115MB/s)(757MiB/6892msec); 0 zone resets
00:22:24.924     slat (usec): min=62, max=33439, avg=2255.74, stdev=4852.62
00:22:24.924     clat (msec): min=154, max=3131, avg=428.21, stdev=406.91
00:22:24.924      lat (msec): min=154, max=3136, avg=430.58, stdev=407.38
00:22:24.924     clat percentiles (msec):
00:22:24.924      |  1.00th=[  182],  5.00th=[  245], 10.00th=[  275], 20.00th=[  321],
00:22:24.924      | 30.00th=[  338], 40.00th=[  355], 50.00th=[  376], 60.00th=[  384],
00:22:24.924      | 70.00th=[  401], 80.00th=[  426], 90.00th=[  468], 95.00th=[  498],
00:22:24.924      | 99.00th=[ 3071], 99.50th=[ 3104], 99.90th=[ 3138], 99.95th=[ 3138],
00:22:24.924      | 99.99th=[ 3138]
00:22:24.924     bw (  KiB/s): min=114917, max=231424, per=100.00%, avg=189607.50, stdev=40454.69, samples=8
00:22:24.924     iops        : min=  112, max=  226, avg=184.88, stdev=39.46, samples=8
00:22:24.924   lat (msec)   : 250=10.36%, 500=79.02%, 750=1.13%, >=2000=1.44%
00:22:24.924   cpu          : usr=1.03%, sys=1.52%, ctx=697, majf=0, minf=1
00:22:24.924   IO depths    : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.0%
00:22:24.924      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:22:24.924      complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:22:24.924      issued rwts: total=764,828,0,0 short=0,0,0,0 dropped=0,0,0,0
00:22:24.924      latency   : target=0, window=0, percentile=100.00%, depth=128
00:22:24.924
00:22:24.924 Run status group 0 (all jobs):
00:22:24.924    READ: bw=151MiB/s (159MB/s), 95.8MiB/s-103MiB/s (100MB/s-108MB/s), io=1042MiB (1093MB), run=3497-6892msec
00:22:24.924   WRITE: bw=162MiB/s (170MB/s), 104MiB/s-110MiB/s (109MB/s-115MB/s), io=1119MiB (1173MB), run=3497-6892msec
00:22:24.924
00:22:24.924 Disk stats (read/write):
00:22:24.924   sda: ios=426/412, merge=0/0, ticks=72040/97294, in_queue=169334, util=82.11%
00:22:24.924   sdb: ios=782/777, merge=0/0, ticks=89401/126951, in_queue=216352, util=94.02%
00:22:24.924 iscsi hotplug test: fio failed as expected
00:22:24.924 Cleaning up iSCSI connection
00:22:24.924 05:11:39 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@132 -- # fio_status=2
00:22:24.924 05:11:39 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@134 -- # '[' 2 -eq 0 ']'
00:22:24.924 05:11:39 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@138 -- # echo 'iscsi hotplug test: fio failed as expected'
00:22:24.924 05:11:39 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@141 -- # iscsicleanup
00:22:24.924
05:11:39 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection'
00:22:24.924 05:11:39 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout
00:22:24.924 Logging out of session [sid: 19, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260]
00:22:24.924 Logout of [sid: 19, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful.
00:22:24.924 05:11:39 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete
00:22:24.924 05:11:39 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@983 -- # rm -rf
00:22:24.924 05:11:39 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_delete_target_node iqn.2016-06.io.spdk:Target3
00:22:24.924 05:11:39 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@144 -- # delete_tmp_files
00:22:24.924 05:11:39 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@14 -- # rm -f /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/iscsi2.json
00:22:24.924 05:11:39 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@15 -- # rm -f ./local-job0-0-verify.state
00:22:24.924 05:11:39 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@16 -- # rm -f ./local-job1-1-verify.state
00:22:24.924 05:11:39 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:22:24.924 05:11:39 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@148 -- # killprocess 76779
00:22:24.924 05:11:39 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@948 -- # '[' -z 76779 ']'
00:22:24.924 05:11:39 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@952 -- # kill -0 76779
00:22:24.924 05:11:39 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@953 -- # uname
00:22:24.924 05:11:39 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:22:24.924 05:11:39 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76779
00:22:24.924 killing process with pid 76779
00:22:24.924 05:11:39 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:22:24.924 05:11:39 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:22:24.924 05:11:39 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76779'
00:22:24.924 05:11:39 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@967 -- # kill 76779
00:22:24.924 05:11:39 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@972 -- # wait 76779
00:22:27.458 05:11:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@150 -- # iscsitestfini
00:22:27.458 05:11:41 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']'
00:22:27.458
00:22:27.458 real    5m23.385s
00:22:27.458 user    3m43.387s
00:22:27.458 sys     1m52.382s
00:22:27.458 05:11:41 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@1124 -- # xtrace_disable
00:22:27.458 05:11:41 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x
00:22:27.458 ************************************
00:22:27.458 END TEST iscsi_tgt_fio
00:22:27.458 ************************************
00:22:27.458 05:11:41 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@38 -- # run_test iscsi_tgt_qos /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos/qos.sh
00:22:27.458 05:11:41 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:22:27.458 05:11:41 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable
00:22:27.458 05:11:41 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x
00:22:27.458 ************************************
00:22:27.458 START TEST iscsi_tgt_qos
00:22:27.458 ************************************
00:22:27.458 05:11:41 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos/qos.sh
00:22:27.458 * Looking for test storage...
00:22:27.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos
00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh
00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br
00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int
00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br
00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns
00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE")
00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int
00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2
00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br
00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2
00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1
00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3
00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2
00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260
00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32
00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2
00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY
00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1
00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos --
iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@11 -- # iscsitestinit 00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@44 -- # '[' -z 10.0.0.1 ']' 00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@49 -- # '[' -z 10.0.0.2 ']' 00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@54 -- # MALLOC_BDEV_SIZE=64 00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@55 -- # MALLOC_BLOCK_SIZE=512 00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@56 -- # IOPS_RESULT= 00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@57 -- # BANDWIDTH_RESULT= 00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@58 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@60 -- # timing_enter start_iscsi_tgt 00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@63 -- # pid=80850 00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@64 -- # echo 'Process pid: 80850' 00:22:27.458 Process pid: 80850 00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@65 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@62 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt 00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@66 -- # waitforlisten 80850 00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@829 -- # '[' -z 80850 ']' 00:22:27.458 05:11:42 
iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:27.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:27.458 05:11:42 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:22:27.717 [2024-07-24 05:11:42.152786] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:22:27.717 [2024-07-24 05:11:42.152952] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80850 ] 00:22:27.717 [2024-07-24 05:11:42.335520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.975 [2024-07-24 05:11:42.556835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.234 [2024-07-24 05:11:42.791647] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:29.170 05:11:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:29.170 05:11:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@862 -- # return 0 00:22:29.170 iscsi_tgt is listening. Running tests... 00:22:29.170 05:11:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@67 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:22:29.170 05:11:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@69 -- # timing_exit start_iscsi_tgt 00:22:29.170 05:11:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:29.170 05:11:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:22:29.170 05:11:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@71 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:22:29.170 05:11:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.170 05:11:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:22:29.170 05:11:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.170 05:11:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@72 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:22:29.170 05:11:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.170 05:11:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:22:29.170 05:11:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.170 05:11:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@73 -- # rpc_cmd bdev_malloc_create 64 512 00:22:29.170 05:11:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.170 05:11:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:22:29.170 Malloc0 00:22:29.170 05:11:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.170 05:11:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@78 -- # rpc_cmd iscsi_create_target_node Target1 Target1_alias Malloc0:0 1:2 64 -d 00:22:29.170 05:11:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.170 05:11:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:22:29.170 05:11:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.170 05:11:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@79 
-- # sleep 1 00:22:30.104 05:11:44 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@81 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:22:30.104 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:22:30.104 05:11:44 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@82 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:22:30.104 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:22:30.104 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:22:30.104 05:11:44 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@84 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:22:30.104 05:11:44 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@87 -- # run_fio Malloc0 00:22:30.104 05:11:44 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:22:30.104 05:11:44 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:22:30.104 05:11:44 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:22:30.104 05:11:44 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:22:30.104 05:11:44 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:22:30.104 05:11:44 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:22:30.104 05:11:44 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:22:30.104 05:11:44 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:22:30.104 05:11:44 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.104 05:11:44 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:22:30.104 [2024-07-24 05:11:44.663115] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:30.104 05:11:44 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.104 05:11:44 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:22:30.104 "tick_rate": 2100000000, 
00:22:30.104 "ticks": 2689356440888, 00:22:30.104 "bdevs": [ 00:22:30.104 { 00:22:30.104 "name": "Malloc0", 00:22:30.104 "bytes_read": 37376, 00:22:30.104 "num_read_ops": 3, 00:22:30.104 "bytes_written": 0, 00:22:30.104 "num_write_ops": 0, 00:22:30.104 "bytes_unmapped": 0, 00:22:30.104 "num_unmap_ops": 0, 00:22:30.104 "bytes_copied": 0, 00:22:30.104 "num_copy_ops": 0, 00:22:30.104 "read_latency_ticks": 1194294, 00:22:30.104 "max_read_latency_ticks": 501938, 00:22:30.104 "min_read_latency_ticks": 298420, 00:22:30.104 "write_latency_ticks": 0, 00:22:30.104 "max_write_latency_ticks": 0, 00:22:30.104 "min_write_latency_ticks": 0, 00:22:30.104 "unmap_latency_ticks": 0, 00:22:30.104 "max_unmap_latency_ticks": 0, 00:22:30.104 "min_unmap_latency_ticks": 0, 00:22:30.104 "copy_latency_ticks": 0, 00:22:30.104 "max_copy_latency_ticks": 0, 00:22:30.104 "min_copy_latency_ticks": 0, 00:22:30.104 "io_error": {} 00:22:30.104 } 00:22:30.104 ] 00:22:30.104 }' 00:22:30.104 05:11:44 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:22:30.104 05:11:44 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=3 00:22:30.104 05:11:44 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:22:30.362 05:11:44 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=37376 00:22:30.362 05:11:44 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:22:30.362 [global] 00:22:30.362 thread=1 00:22:30.362 invalidate=1 00:22:30.362 rw=randread 00:22:30.362 time_based=1 00:22:30.362 runtime=5 00:22:30.362 ioengine=libaio 00:22:30.362 direct=1 00:22:30.362 bs=1024 00:22:30.362 iodepth=128 00:22:30.362 norandommap=1 00:22:30.362 numjobs=1 00:22:30.362 00:22:30.362 [job0] 00:22:30.362 filename=/dev/sda 00:22:30.362 queue_depth set to 113 (sda) 00:22:30.362 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, 
iodepth=128 00:22:30.362 fio-3.35 00:22:30.362 Starting 1 thread 00:22:35.680 00:22:35.680 job0: (groupid=0, jobs=1): err= 0: pid=80941: Wed Jul 24 05:11:50 2024 00:22:35.680 read: IOPS=50.1k, BW=48.9MiB/s (51.3MB/s)(245MiB/5003msec) 00:22:35.680 slat (nsec): min=1902, max=4267.2k, avg=18566.29, stdev=54263.67 00:22:35.680 clat (usec): min=903, max=8479, avg=2537.31, stdev=149.95 00:22:35.680 lat (usec): min=908, max=8482, avg=2555.88, stdev=141.02 00:22:35.680 clat percentiles (usec): 00:22:35.680 | 1.00th=[ 2278], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2507], 00:22:35.680 | 30.00th=[ 2540], 40.00th=[ 2540], 50.00th=[ 2540], 60.00th=[ 2540], 00:22:35.680 | 70.00th=[ 2573], 80.00th=[ 2573], 90.00th=[ 2573], 95.00th=[ 2606], 00:22:35.680 | 99.00th=[ 2704], 99.50th=[ 2802], 99.90th=[ 4228], 99.95th=[ 5080], 00:22:35.680 | 99.99th=[ 8356] 00:22:35.680 bw ( KiB/s): min=49636, max=50336, per=100.00%, avg=50134.22, stdev=207.90, samples=9 00:22:35.680 iops : min=49636, max=50336, avg=50134.22, stdev=207.90, samples=9 00:22:35.680 lat (usec) : 1000=0.01% 00:22:35.680 lat (msec) : 2=0.11%, 4=99.78%, 10=0.11% 00:22:35.680 cpu : usr=7.58%, sys=17.01%, ctx=196230, majf=0, minf=32 00:22:35.680 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:22:35.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.680 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:35.680 issued rwts: total=250436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:35.680 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:35.680 00:22:35.680 Run status group 0 (all jobs): 00:22:35.680 READ: bw=48.9MiB/s (51.3MB/s), 48.9MiB/s-48.9MiB/s (51.3MB/s-51.3MB/s), io=245MiB (256MB), run=5003-5003msec 00:22:35.680 00:22:35.680 Disk stats (read/write): 00:22:35.680 sda: ios=244786/0, merge=0/0, ticks=533183/0, in_queue=533183, util=98.11% 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd 
bdev_get_iostat -b Malloc0 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:22:35.680 "tick_rate": 2100000000, 00:22:35.680 "ticks": 2700806017872, 00:22:35.680 "bdevs": [ 00:22:35.680 { 00:22:35.680 "name": "Malloc0", 00:22:35.680 "bytes_read": 257556992, 00:22:35.680 "num_read_ops": 250493, 00:22:35.680 "bytes_written": 0, 00:22:35.680 "num_write_ops": 0, 00:22:35.680 "bytes_unmapped": 0, 00:22:35.680 "num_unmap_ops": 0, 00:22:35.680 "bytes_copied": 0, 00:22:35.680 "num_copy_ops": 0, 00:22:35.680 "read_latency_ticks": 56265603026, 00:22:35.680 "max_read_latency_ticks": 2522672, 00:22:35.680 "min_read_latency_ticks": 13178, 00:22:35.680 "write_latency_ticks": 0, 00:22:35.680 "max_write_latency_ticks": 0, 00:22:35.680 "min_write_latency_ticks": 0, 00:22:35.680 "unmap_latency_ticks": 0, 00:22:35.680 "max_unmap_latency_ticks": 0, 00:22:35.680 "min_unmap_latency_ticks": 0, 00:22:35.680 "copy_latency_ticks": 0, 00:22:35.680 "max_copy_latency_ticks": 0, 00:22:35.680 "min_copy_latency_ticks": 0, 00:22:35.680 "io_error": {} 00:22:35.680 } 00:22:35.680 ] 00:22:35.680 }' 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=250493 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=257556992 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=50098 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=51503923 00:22:35.680 05:11:50 
iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@90 -- # IOPS_LIMIT=25049 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@91 -- # BANDWIDTH_LIMIT=25751961 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@94 -- # READ_BANDWIDTH_LIMIT=12875980 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@98 -- # IOPS_LIMIT=25000 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@99 -- # BANDWIDTH_LIMIT_MB=24 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@100 -- # BANDWIDTH_LIMIT=25165824 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@101 -- # READ_BANDWIDTH_LIMIT_MB=12 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@102 -- # READ_BANDWIDTH_LIMIT=12582912 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@105 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 25000 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@106 -- # run_fio Malloc0 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:22:35.680 "tick_rate": 2100000000, 00:22:35.680 "ticks": 2701049656392, 00:22:35.680 "bdevs": [ 00:22:35.680 { 00:22:35.680 "name": "Malloc0", 00:22:35.680 "bytes_read": 257556992, 00:22:35.680 "num_read_ops": 250493, 00:22:35.680 "bytes_written": 0, 00:22:35.680 "num_write_ops": 0, 00:22:35.680 "bytes_unmapped": 0, 00:22:35.680 "num_unmap_ops": 0, 00:22:35.680 "bytes_copied": 0, 00:22:35.680 "num_copy_ops": 0, 00:22:35.680 "read_latency_ticks": 56265603026, 00:22:35.680 "max_read_latency_ticks": 2522672, 00:22:35.680 "min_read_latency_ticks": 13178, 00:22:35.680 "write_latency_ticks": 0, 00:22:35.680 "max_write_latency_ticks": 0, 00:22:35.680 "min_write_latency_ticks": 0, 00:22:35.680 "unmap_latency_ticks": 0, 00:22:35.680 "max_unmap_latency_ticks": 0, 00:22:35.680 "min_unmap_latency_ticks": 0, 00:22:35.680 "copy_latency_ticks": 0, 00:22:35.680 "max_copy_latency_ticks": 0, 00:22:35.680 "min_copy_latency_ticks": 0, 00:22:35.680 "io_error": {} 00:22:35.680 } 00:22:35.680 ] 00:22:35.680 }' 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=250493 00:22:35.680 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:22:35.939 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=257556992 00:22:35.939 05:11:50 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:22:35.939 [global] 00:22:35.939 thread=1 00:22:35.939 invalidate=1 00:22:35.939 rw=randread 00:22:35.939 time_based=1 00:22:35.939 
runtime=5 00:22:35.939 ioengine=libaio 00:22:35.939 direct=1 00:22:35.939 bs=1024 00:22:35.939 iodepth=128 00:22:35.939 norandommap=1 00:22:35.939 numjobs=1 00:22:35.939 00:22:35.939 [job0] 00:22:35.939 filename=/dev/sda 00:22:35.939 queue_depth set to 113 (sda) 00:22:35.939 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:22:35.939 fio-3.35 00:22:35.939 Starting 1 thread 00:22:41.209 00:22:41.209 job0: (groupid=0, jobs=1): err= 0: pid=81034: Wed Jul 24 05:11:55 2024 00:22:41.209 read: IOPS=25.0k, BW=24.4MiB/s (25.6MB/s)(122MiB/5005msec) 00:22:41.209 slat (usec): min=3, max=3476, avg=37.52, stdev=148.38 00:22:41.209 clat (usec): min=1710, max=9592, avg=5082.45, stdev=286.18 00:22:41.209 lat (usec): min=1719, max=9601, avg=5119.96, stdev=316.63 00:22:41.209 clat percentiles (usec): 00:22:41.209 | 1.00th=[ 4293], 5.00th=[ 4883], 10.00th=[ 4948], 20.00th=[ 5014], 00:22:41.209 | 30.00th=[ 5014], 40.00th=[ 5014], 50.00th=[ 5014], 60.00th=[ 5080], 00:22:41.209 | 70.00th=[ 5080], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5735], 00:22:41.209 | 99.00th=[ 5932], 99.50th=[ 5932], 99.90th=[ 7635], 99.95th=[ 8029], 00:22:41.209 | 99.99th=[ 8848] 00:22:41.209 bw ( KiB/s): min=24922, max=25050, per=100.00%, avg=25017.78, stdev=44.56, samples=9 00:22:41.209 iops : min=24922, max=25050, avg=25017.78, stdev=44.56, samples=9 00:22:41.209 lat (msec) : 2=0.01%, 4=0.24%, 10=99.76% 00:22:41.209 cpu : usr=6.31%, sys=13.99%, ctx=67709, majf=0, minf=32 00:22:41.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:41.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:41.209 issued rwts: total=125083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.209 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:41.209 00:22:41.209 Run status group 0 (all jobs): 00:22:41.209 READ: 
bw=24.4MiB/s (25.6MB/s), 24.4MiB/s-24.4MiB/s (25.6MB/s-25.6MB/s), io=122MiB (128MB), run=5005-5005msec 00:22:41.209 00:22:41.209 Disk stats (read/write): 00:22:41.209 sda: ios=122294/0, merge=0/0, ticks=530836/0, in_queue=530836, util=98.12% 00:22:41.209 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:22:41.209 05:11:55 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.209 05:11:55 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:22:41.209 05:11:55 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.209 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:22:41.209 "tick_rate": 2100000000, 00:22:41.209 "ticks": 2712435961824, 00:22:41.209 "bdevs": [ 00:22:41.209 { 00:22:41.209 "name": "Malloc0", 00:22:41.209 "bytes_read": 385641984, 00:22:41.209 "num_read_ops": 375576, 00:22:41.209 "bytes_written": 0, 00:22:41.209 "num_write_ops": 0, 00:22:41.209 "bytes_unmapped": 0, 00:22:41.209 "num_unmap_ops": 0, 00:22:41.210 "bytes_copied": 0, 00:22:41.210 "num_copy_ops": 0, 00:22:41.210 "read_latency_ticks": 606132015348, 00:22:41.210 "max_read_latency_ticks": 11811074, 00:22:41.210 "min_read_latency_ticks": 13178, 00:22:41.210 "write_latency_ticks": 0, 00:22:41.210 "max_write_latency_ticks": 0, 00:22:41.210 "min_write_latency_ticks": 0, 00:22:41.210 "unmap_latency_ticks": 0, 00:22:41.210 "max_unmap_latency_ticks": 0, 00:22:41.210 "min_unmap_latency_ticks": 0, 00:22:41.210 "copy_latency_ticks": 0, 00:22:41.210 "max_copy_latency_ticks": 0, 00:22:41.210 "min_copy_latency_ticks": 0, 00:22:41.210 "io_error": {} 00:22:41.210 } 00:22:41.210 ] 00:22:41.210 }' 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=375576 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r 
'.bdevs[0].bytes_read' 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=385641984 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=25016 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=25616998 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@107 -- # verify_qos_limits 25016 25000 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=25016 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=25000 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@110 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 0 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@111 -- # run_fio Malloc0 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:22:41.210 
05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:22:41.210 "tick_rate": 2100000000, 00:22:41.210 "ticks": 2712687897176, 00:22:41.210 "bdevs": [ 00:22:41.210 { 00:22:41.210 "name": "Malloc0", 00:22:41.210 "bytes_read": 385641984, 00:22:41.210 "num_read_ops": 375576, 00:22:41.210 "bytes_written": 0, 00:22:41.210 "num_write_ops": 0, 00:22:41.210 "bytes_unmapped": 0, 00:22:41.210 "num_unmap_ops": 0, 00:22:41.210 "bytes_copied": 0, 00:22:41.210 "num_copy_ops": 0, 00:22:41.210 "read_latency_ticks": 606132015348, 00:22:41.210 "max_read_latency_ticks": 11811074, 00:22:41.210 "min_read_latency_ticks": 13178, 00:22:41.210 "write_latency_ticks": 0, 00:22:41.210 "max_write_latency_ticks": 0, 00:22:41.210 "min_write_latency_ticks": 0, 00:22:41.210 "unmap_latency_ticks": 0, 00:22:41.210 "max_unmap_latency_ticks": 0, 00:22:41.210 "min_unmap_latency_ticks": 0, 00:22:41.210 "copy_latency_ticks": 0, 00:22:41.210 "max_copy_latency_ticks": 0, 00:22:41.210 "min_copy_latency_ticks": 0, 00:22:41.210 "io_error": {} 00:22:41.210 } 00:22:41.210 ] 00:22:41.210 }' 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=375576 00:22:41.210 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:22:41.469 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=385641984 00:22:41.469 05:11:55 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 
5 00:22:41.469 [global] 00:22:41.469 thread=1 00:22:41.469 invalidate=1 00:22:41.469 rw=randread 00:22:41.469 time_based=1 00:22:41.469 runtime=5 00:22:41.469 ioengine=libaio 00:22:41.469 direct=1 00:22:41.469 bs=1024 00:22:41.469 iodepth=128 00:22:41.469 norandommap=1 00:22:41.469 numjobs=1 00:22:41.469 00:22:41.469 [job0] 00:22:41.469 filename=/dev/sda 00:22:41.469 queue_depth set to 113 (sda) 00:22:41.469 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:22:41.469 fio-3.35 00:22:41.469 Starting 1 thread 00:22:46.739 00:22:46.739 job0: (groupid=0, jobs=1): err= 0: pid=81122: Wed Jul 24 05:12:01 2024 00:22:46.739 read: IOPS=50.2k, BW=49.0MiB/s (51.4MB/s)(245MiB/5002msec) 00:22:46.739 slat (nsec): min=1900, max=997376, avg=18526.11, stdev=51850.07 00:22:46.739 clat (usec): min=1058, max=4517, avg=2529.53, stdev=73.32 00:22:46.739 lat (usec): min=1069, max=4519, avg=2548.06, stdev=52.60 00:22:46.739 clat percentiles (usec): 00:22:46.739 | 1.00th=[ 2311], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2507], 00:22:46.739 | 30.00th=[ 2540], 40.00th=[ 2540], 50.00th=[ 2540], 60.00th=[ 2540], 00:22:46.739 | 70.00th=[ 2540], 80.00th=[ 2573], 90.00th=[ 2573], 95.00th=[ 2606], 00:22:46.739 | 99.00th=[ 2638], 99.50th=[ 2704], 99.90th=[ 2966], 99.95th=[ 3130], 00:22:46.739 | 99.99th=[ 4113] 00:22:46.739 bw ( KiB/s): min=49984, max=50580, per=100.00%, avg=50292.89, stdev=154.63, samples=9 00:22:46.739 iops : min=49984, max=50580, avg=50293.11, stdev=154.69, samples=9 00:22:46.739 lat (msec) : 2=0.06%, 4=99.92%, 10=0.01% 00:22:46.739 cpu : usr=6.68%, sys=17.96%, ctx=225060, majf=0, minf=32 00:22:46.739 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:22:46.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:46.739 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:46.739 issued rwts: total=251218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:22:46.739 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:46.739 00:22:46.739 Run status group 0 (all jobs): 00:22:46.739 READ: bw=49.0MiB/s (51.4MB/s), 49.0MiB/s-49.0MiB/s (51.4MB/s-51.4MB/s), io=245MiB (257MB), run=5002-5002msec 00:22:46.739 00:22:46.739 Disk stats (read/write): 00:22:46.739 sda: ios=245520/0, merge=0/0, ticks=534231/0, in_queue=534231, util=98.13% 00:22:46.739 05:12:01 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:22:46.739 05:12:01 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.739 05:12:01 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:22:46.739 05:12:01 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.739 05:12:01 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:22:46.739 "tick_rate": 2100000000, 00:22:46.739 "ticks": 2724120331486, 00:22:46.739 "bdevs": [ 00:22:46.739 { 00:22:46.739 "name": "Malloc0", 00:22:46.739 "bytes_read": 642889216, 00:22:46.739 "num_read_ops": 626794, 00:22:46.739 "bytes_written": 0, 00:22:46.739 "num_write_ops": 0, 00:22:46.739 "bytes_unmapped": 0, 00:22:46.739 "num_unmap_ops": 0, 00:22:46.739 "bytes_copied": 0, 00:22:46.739 "num_copy_ops": 0, 00:22:46.739 "read_latency_ticks": 662457611564, 00:22:46.739 "max_read_latency_ticks": 11811074, 00:22:46.739 "min_read_latency_ticks": 13178, 00:22:46.739 "write_latency_ticks": 0, 00:22:46.739 "max_write_latency_ticks": 0, 00:22:46.739 "min_write_latency_ticks": 0, 00:22:46.739 "unmap_latency_ticks": 0, 00:22:46.739 "max_unmap_latency_ticks": 0, 00:22:46.739 "min_unmap_latency_ticks": 0, 00:22:46.739 "copy_latency_ticks": 0, 00:22:46.739 "max_copy_latency_ticks": 0, 00:22:46.739 "min_copy_latency_ticks": 0, 00:22:46.739 "io_error": {} 00:22:46.739 } 00:22:46.739 ] 00:22:46.739 }' 00:22:46.739 05:12:01 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:22:46.739 05:12:01 
iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=626794 00:22:46.739 05:12:01 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:22:46.739 05:12:01 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=642889216 00:22:46.739 05:12:01 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=50243 00:22:46.739 05:12:01 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=51449446 00:22:46.739 05:12:01 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@112 -- # '[' 50243 -gt 25000 ']' 00:22:46.739 05:12:01 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@115 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 25000 00:22:46.739 05:12:01 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.739 05:12:01 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:22:46.739 05:12:01 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.739 05:12:01 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@116 -- # run_fio Malloc0 00:22:46.739 05:12:01 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:22:46.739 05:12:01 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:22:46.739 05:12:01 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:22:46.739 05:12:01 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:22:46.739 05:12:01 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:22:46.739 05:12:01 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:22:46.740 05:12:01 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:22:46.740 05:12:01 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:22:46.740 05:12:01 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.740 05:12:01 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:22:46.740 05:12:01 iscsi_tgt.iscsi_tgt_qos -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.740 05:12:01 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:22:46.740 "tick_rate": 2100000000, 00:22:46.740 "ticks": 2724372389624, 00:22:46.740 "bdevs": [ 00:22:46.740 { 00:22:46.740 "name": "Malloc0", 00:22:46.740 "bytes_read": 642889216, 00:22:46.740 "num_read_ops": 626794, 00:22:46.740 "bytes_written": 0, 00:22:46.740 "num_write_ops": 0, 00:22:46.740 "bytes_unmapped": 0, 00:22:46.740 "num_unmap_ops": 0, 00:22:46.740 "bytes_copied": 0, 00:22:46.740 "num_copy_ops": 0, 00:22:46.740 "read_latency_ticks": 662457611564, 00:22:46.740 "max_read_latency_ticks": 11811074, 00:22:46.740 "min_read_latency_ticks": 13178, 00:22:46.740 "write_latency_ticks": 0, 00:22:46.740 "max_write_latency_ticks": 0, 00:22:46.740 "min_write_latency_ticks": 0, 00:22:46.740 "unmap_latency_ticks": 0, 00:22:46.740 "max_unmap_latency_ticks": 0, 00:22:46.740 "min_unmap_latency_ticks": 0, 00:22:46.740 "copy_latency_ticks": 0, 00:22:46.740 "max_copy_latency_ticks": 0, 00:22:46.740 "min_copy_latency_ticks": 0, 00:22:46.740 "io_error": {} 00:22:46.740 } 00:22:46.740 ] 00:22:46.740 }' 00:22:46.740 05:12:01 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:22:46.998 05:12:01 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=626794 00:22:46.998 05:12:01 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:22:46.998 05:12:01 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=642889216 00:22:46.998 05:12:01 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:22:46.998 [global] 00:22:46.998 thread=1 00:22:46.998 invalidate=1 00:22:46.998 rw=randread 00:22:46.998 time_based=1 00:22:46.998 runtime=5 00:22:46.998 ioengine=libaio 00:22:46.998 direct=1 00:22:46.998 bs=1024 00:22:46.998 iodepth=128 00:22:46.998 norandommap=1 00:22:46.998 numjobs=1 00:22:46.998 00:22:46.998 [job0] 
00:22:46.998 filename=/dev/sda 00:22:46.998 queue_depth set to 113 (sda) 00:22:46.998 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:22:46.998 fio-3.35 00:22:46.998 Starting 1 thread 00:22:52.262 00:22:52.262 job0: (groupid=0, jobs=1): err= 0: pid=81209: Wed Jul 24 05:12:06 2024 00:22:52.262 read: IOPS=25.0k, BW=24.4MiB/s (25.6MB/s)(122MiB/5005msec) 00:22:52.262 slat (usec): min=3, max=2031, avg=37.51, stdev=148.37 00:22:52.262 clat (usec): min=1394, max=9044, avg=5080.51, stdev=273.78 00:22:52.262 lat (usec): min=1407, max=9047, avg=5118.02, stdev=305.34 00:22:52.262 clat percentiles (usec): 00:22:52.262 | 1.00th=[ 4293], 5.00th=[ 4883], 10.00th=[ 4948], 20.00th=[ 5014], 00:22:52.262 | 30.00th=[ 5014], 40.00th=[ 5014], 50.00th=[ 5014], 60.00th=[ 5080], 00:22:52.262 | 70.00th=[ 5080], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5735], 00:22:52.262 | 99.00th=[ 5932], 99.50th=[ 5932], 99.90th=[ 6063], 99.95th=[ 6587], 00:22:52.262 | 99.99th=[ 8094] 00:22:52.262 bw ( KiB/s): min=25000, max=25050, per=100.00%, avg=25024.56, stdev=22.95, samples=9 00:22:52.262 iops : min=25000, max=25050, avg=25024.56, stdev=22.95, samples=9 00:22:52.262 lat (msec) : 2=0.05%, 4=0.07%, 10=99.88% 00:22:52.262 cpu : usr=6.67%, sys=13.59%, ctx=67204, majf=0, minf=32 00:22:52.262 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:52.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:52.262 issued rwts: total=125131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.262 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:52.262 00:22:52.262 Run status group 0 (all jobs): 00:22:52.262 READ: bw=24.4MiB/s (25.6MB/s), 24.4MiB/s-24.4MiB/s (25.6MB/s-25.6MB/s), io=122MiB (128MB), run=5005-5005msec 00:22:52.262 00:22:52.262 Disk stats (read/write): 00:22:52.262 sda: ios=122250/0, 
merge=0/0, ticks=530348/0, in_queue=530348, util=98.15% 00:22:52.262 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:22:52.262 05:12:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.262 05:12:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:22:52.262 05:12:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.262 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:22:52.262 "tick_rate": 2100000000, 00:22:52.262 "ticks": 2735801088840, 00:22:52.262 "bdevs": [ 00:22:52.262 { 00:22:52.262 "name": "Malloc0", 00:22:52.262 "bytes_read": 771023360, 00:22:52.262 "num_read_ops": 751925, 00:22:52.262 "bytes_written": 0, 00:22:52.262 "num_write_ops": 0, 00:22:52.262 "bytes_unmapped": 0, 00:22:52.262 "num_unmap_ops": 0, 00:22:52.262 "bytes_copied": 0, 00:22:52.262 "num_copy_ops": 0, 00:22:52.262 "read_latency_ticks": 1213303551344, 00:22:52.262 "max_read_latency_ticks": 11811074, 00:22:52.262 "min_read_latency_ticks": 13178, 00:22:52.262 "write_latency_ticks": 0, 00:22:52.262 "max_write_latency_ticks": 0, 00:22:52.262 "min_write_latency_ticks": 0, 00:22:52.262 "unmap_latency_ticks": 0, 00:22:52.262 "max_unmap_latency_ticks": 0, 00:22:52.262 "min_unmap_latency_ticks": 0, 00:22:52.262 "copy_latency_ticks": 0, 00:22:52.262 "max_copy_latency_ticks": 0, 00:22:52.262 "min_copy_latency_ticks": 0, 00:22:52.262 "io_error": {} 00:22:52.262 } 00:22:52.262 ] 00:22:52.262 }' 00:22:52.262 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:22:52.262 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=751925 00:22:52.262 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:22:52.262 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=771023360 00:22:52.262 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=25026 
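The IOPS_RESULT and BANDWIDTH_RESULT values in the trace are derived exactly as qos.sh@29-33 shows: take two bdev_get_iostat snapshots, subtract the starting counters from the ending ones, and divide by the fio run time. A minimal sketch of that arithmetic, using only counter values that appear in this log (the helper name below is illustrative; the fixed 5-second window comes from `run_time=5` in the trace):

```python
# Throughput derivation used by qos.sh: two bdev_get_iostat snapshots,
# counter deltas divided by the fio run time (5 s in this trace).

def qos_results(start_ops, end_ops, start_bytes, end_bytes, run_time=5):
    """Return (IOPS_RESULT, BANDWIDTH_RESULT) as integer per-second rates."""
    iops = (end_ops - start_ops) // run_time
    bandwidth = (end_bytes - start_bytes) // run_time  # bytes per second
    return iops, bandwidth

# Counter values captured in the log for the 25000-IOPS-limited run:
iops, bw = qos_results(start_ops=626794, end_ops=751925,
                       start_bytes=642889216, end_bytes=771023360)
print(iops, bw)  # 25026 25626828 -- matches IOPS_RESULT / BANDWIDTH_RESULT
```

The same delta computation repeats for each throttled run in the trace; only the snapshot values change.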
00:22:52.262 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=25626828 00:22:52.262 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@117 -- # verify_qos_limits 25026 25000 00:22:52.262 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=25026 00:22:52.262 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=25000 00:22:52.262 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:22:52.262 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:22:52.262 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:22:52.262 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:22:52.262 I/O rate limiting tests successful 00:22:52.262 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@119 -- # echo 'I/O rate limiting tests successful' 00:22:52.262 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@122 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 0 --rw_mbytes_per_sec 24 00:22:52.262 05:12:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.262 05:12:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:22:52.521 05:12:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.521 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@123 -- # run_fio Malloc0 00:22:52.521 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:22:52.521 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:22:52.521 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:22:52.521 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:22:52.522 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:22:52.522 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:22:52.522 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:22:52.522 05:12:06 
iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:22:52.522 05:12:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.522 05:12:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:22:52.522 05:12:06 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.522 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:22:52.522 "tick_rate": 2100000000, 00:22:52.522 "ticks": 2736070062232, 00:22:52.522 "bdevs": [ 00:22:52.522 { 00:22:52.522 "name": "Malloc0", 00:22:52.522 "bytes_read": 771023360, 00:22:52.522 "num_read_ops": 751925, 00:22:52.522 "bytes_written": 0, 00:22:52.522 "num_write_ops": 0, 00:22:52.522 "bytes_unmapped": 0, 00:22:52.522 "num_unmap_ops": 0, 00:22:52.522 "bytes_copied": 0, 00:22:52.522 "num_copy_ops": 0, 00:22:52.522 "read_latency_ticks": 1213303551344, 00:22:52.522 "max_read_latency_ticks": 11811074, 00:22:52.522 "min_read_latency_ticks": 13178, 00:22:52.522 "write_latency_ticks": 0, 00:22:52.522 "max_write_latency_ticks": 0, 00:22:52.522 "min_write_latency_ticks": 0, 00:22:52.522 "unmap_latency_ticks": 0, 00:22:52.522 "max_unmap_latency_ticks": 0, 00:22:52.522 "min_unmap_latency_ticks": 0, 00:22:52.522 "copy_latency_ticks": 0, 00:22:52.522 "max_copy_latency_ticks": 0, 00:22:52.522 "min_copy_latency_ticks": 0, 00:22:52.522 "io_error": {} 00:22:52.522 } 00:22:52.522 ] 00:22:52.522 }' 00:22:52.522 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:22:52.522 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=751925 00:22:52.522 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:22:52.522 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=771023360 00:22:52.522 05:12:06 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 
00:22:52.522 [global] 00:22:52.522 thread=1 00:22:52.522 invalidate=1 00:22:52.522 rw=randread 00:22:52.522 time_based=1 00:22:52.522 runtime=5 00:22:52.522 ioengine=libaio 00:22:52.522 direct=1 00:22:52.522 bs=1024 00:22:52.522 iodepth=128 00:22:52.522 norandommap=1 00:22:52.522 numjobs=1 00:22:52.522 00:22:52.522 [job0] 00:22:52.522 filename=/dev/sda 00:22:52.522 queue_depth set to 113 (sda) 00:22:52.781 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:22:52.781 fio-3.35 00:22:52.781 Starting 1 thread 00:22:58.105 00:22:58.105 job0: (groupid=0, jobs=1): err= 0: pid=81297: Wed Jul 24 05:12:12 2024 00:22:58.105 read: IOPS=24.6k, BW=24.0MiB/s (25.2MB/s)(120MiB/5005msec) 00:22:58.105 slat (usec): min=3, max=1547, avg=38.18, stdev=151.05 00:22:58.105 clat (usec): min=1484, max=9760, avg=5169.03, stdev=347.16 00:22:58.105 lat (usec): min=1494, max=9764, avg=5207.21, stdev=367.31 00:22:58.105 clat percentiles (usec): 00:22:58.105 | 1.00th=[ 4490], 5.00th=[ 4686], 10.00th=[ 4883], 20.00th=[ 4948], 00:22:58.105 | 30.00th=[ 5014], 40.00th=[ 5080], 50.00th=[ 5080], 60.00th=[ 5145], 00:22:58.105 | 70.00th=[ 5211], 80.00th=[ 5276], 90.00th=[ 5800], 95.00th=[ 5932], 00:22:58.105 | 99.00th=[ 5932], 99.50th=[ 5997], 99.90th=[ 6325], 99.95th=[ 6849], 00:22:58.105 | 99.99th=[ 8848] 00:22:58.105 bw ( KiB/s): min=24574, max=24624, per=100.00%, avg=24599.56, stdev=21.67, samples=9 00:22:58.105 iops : min=24574, max=24624, avg=24599.56, stdev=21.67, samples=9 00:22:58.105 lat (msec) : 2=0.03%, 4=0.08%, 10=99.89% 00:22:58.105 cpu : usr=6.24%, sys=13.87%, ctx=65620, majf=0, minf=32 00:22:58.105 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:58.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:58.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:58.105 issued rwts: total=122986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:58.105 
latency : target=0, window=0, percentile=100.00%, depth=128 00:22:58.105 00:22:58.105 Run status group 0 (all jobs): 00:22:58.105 READ: bw=24.0MiB/s (25.2MB/s), 24.0MiB/s-24.0MiB/s (25.2MB/s-25.2MB/s), io=120MiB (126MB), run=5005-5005msec 00:22:58.105 00:22:58.105 Disk stats (read/write): 00:22:58.105 sda: ios=120173/0, merge=0/0, ticks=530652/0, in_queue=530652, util=98.11% 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:22:58.105 "tick_rate": 2100000000, 00:22:58.105 "ticks": 2747443444602, 00:22:58.105 "bdevs": [ 00:22:58.105 { 00:22:58.105 "name": "Malloc0", 00:22:58.105 "bytes_read": 896961024, 00:22:58.105 "num_read_ops": 874911, 00:22:58.105 "bytes_written": 0, 00:22:58.105 "num_write_ops": 0, 00:22:58.105 "bytes_unmapped": 0, 00:22:58.105 "num_unmap_ops": 0, 00:22:58.105 "bytes_copied": 0, 00:22:58.105 "num_copy_ops": 0, 00:22:58.105 "read_latency_ticks": 1746699876580, 00:22:58.105 "max_read_latency_ticks": 11811074, 00:22:58.105 "min_read_latency_ticks": 13178, 00:22:58.105 "write_latency_ticks": 0, 00:22:58.105 "max_write_latency_ticks": 0, 00:22:58.105 "min_write_latency_ticks": 0, 00:22:58.105 "unmap_latency_ticks": 0, 00:22:58.105 "max_unmap_latency_ticks": 0, 00:22:58.105 "min_unmap_latency_ticks": 0, 00:22:58.105 "copy_latency_ticks": 0, 00:22:58.105 "max_copy_latency_ticks": 0, 00:22:58.105 "min_copy_latency_ticks": 0, 00:22:58.105 "io_error": {} 00:22:58.105 } 00:22:58.105 ] 00:22:58.105 }' 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos 
-- qos/qos.sh@29 -- # end_io_count=874911 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=896961024 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=24597 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=25187532 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@124 -- # verify_qos_limits 25187532 25165824 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=25187532 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=25165824 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@127 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_mbytes_per_sec 0 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@128 -- # run_fio Malloc0 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- 
qos/qos.sh@19 -- # local end_bytes_read 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:22:58.105 "tick_rate": 2100000000, 00:22:58.105 "ticks": 2747712481324, 00:22:58.105 "bdevs": [ 00:22:58.105 { 00:22:58.105 "name": "Malloc0", 00:22:58.105 "bytes_read": 896961024, 00:22:58.105 "num_read_ops": 874911, 00:22:58.105 "bytes_written": 0, 00:22:58.105 "num_write_ops": 0, 00:22:58.105 "bytes_unmapped": 0, 00:22:58.105 "num_unmap_ops": 0, 00:22:58.105 "bytes_copied": 0, 00:22:58.105 "num_copy_ops": 0, 00:22:58.105 "read_latency_ticks": 1746699876580, 00:22:58.105 "max_read_latency_ticks": 11811074, 00:22:58.105 "min_read_latency_ticks": 13178, 00:22:58.105 "write_latency_ticks": 0, 00:22:58.105 "max_write_latency_ticks": 0, 00:22:58.105 "min_write_latency_ticks": 0, 00:22:58.105 "unmap_latency_ticks": 0, 00:22:58.105 "max_unmap_latency_ticks": 0, 00:22:58.105 "min_unmap_latency_ticks": 0, 00:22:58.105 "copy_latency_ticks": 0, 00:22:58.105 "max_copy_latency_ticks": 0, 00:22:58.105 "min_copy_latency_ticks": 0, 00:22:58.105 "io_error": {} 00:22:58.105 } 00:22:58.105 ] 00:22:58.105 }' 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=874911 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:22:58.105 05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=896961024 00:22:58.105 
05:12:12 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:22:58.105 [global] 00:22:58.105 thread=1 00:22:58.105 invalidate=1 00:22:58.105 rw=randread 00:22:58.105 time_based=1 00:22:58.105 runtime=5 00:22:58.105 ioengine=libaio 00:22:58.105 direct=1 00:22:58.105 bs=1024 00:22:58.105 iodepth=128 00:22:58.105 norandommap=1 00:22:58.105 numjobs=1 00:22:58.105 00:22:58.105 [job0] 00:22:58.105 filename=/dev/sda 00:22:58.105 queue_depth set to 113 (sda) 00:22:58.105 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:22:58.105 fio-3.35 00:22:58.105 Starting 1 thread 00:23:03.371 00:23:03.371 job0: (groupid=0, jobs=1): err= 0: pid=81382: Wed Jul 24 05:12:17 2024 00:23:03.371 read: IOPS=50.3k, BW=49.2MiB/s (51.6MB/s)(246MiB/5003msec) 00:23:03.371 slat (nsec): min=1905, max=2214.1k, avg=18273.75, stdev=53608.44 00:23:03.371 clat (usec): min=821, max=8259, avg=2523.06, stdev=215.34 00:23:03.371 lat (usec): min=825, max=8306, avg=2541.33, stdev=210.07 00:23:03.371 clat percentiles (usec): 00:23:03.371 | 1.00th=[ 2245], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2474], 00:23:03.371 | 30.00th=[ 2507], 40.00th=[ 2507], 50.00th=[ 2507], 60.00th=[ 2540], 00:23:03.371 | 70.00th=[ 2540], 80.00th=[ 2540], 90.00th=[ 2573], 95.00th=[ 2606], 00:23:03.371 | 99.00th=[ 2835], 99.50th=[ 3785], 99.90th=[ 5866], 99.95th=[ 6325], 00:23:03.371 | 99.99th=[ 7242] 00:23:03.371 bw ( KiB/s): min=48318, max=50746, per=99.99%, avg=50338.44, stdev=763.48, samples=9 00:23:03.371 iops : min=48318, max=50746, avg=50338.44, stdev=763.48, samples=9 00:23:03.371 lat (usec) : 1000=0.01% 00:23:03.372 lat (msec) : 2=0.30%, 4=99.23%, 10=0.46% 00:23:03.372 cpu : usr=8.04%, sys=18.89%, ctx=148519, majf=0, minf=32 00:23:03.372 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:23:03.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:23:03.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:03.372 issued rwts: total=251867,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:03.372 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:03.372 00:23:03.372 Run status group 0 (all jobs): 00:23:03.372 READ: bw=49.2MiB/s (51.6MB/s), 49.2MiB/s-49.2MiB/s (51.6MB/s-51.6MB/s), io=246MiB (258MB), run=5003-5003msec 00:23:03.372 00:23:03.372 Disk stats (read/write): 00:23:03.372 sda: ios=246105/0, merge=0/0, ticks=522221/0, in_queue=522221, util=98.05% 00:23:03.372 05:12:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:23:03.372 05:12:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.372 05:12:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:23:03.372 05:12:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.372 05:12:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:23:03.372 "tick_rate": 2100000000, 00:23:03.372 "ticks": 2759112269472, 00:23:03.372 "bdevs": [ 00:23:03.372 { 00:23:03.372 "name": "Malloc0", 00:23:03.372 "bytes_read": 1154872832, 00:23:03.372 "num_read_ops": 1126778, 00:23:03.372 "bytes_written": 0, 00:23:03.372 "num_write_ops": 0, 00:23:03.372 "bytes_unmapped": 0, 00:23:03.372 "num_unmap_ops": 0, 00:23:03.372 "bytes_copied": 0, 00:23:03.372 "num_copy_ops": 0, 00:23:03.372 "read_latency_ticks": 1802732799758, 00:23:03.372 "max_read_latency_ticks": 11811074, 00:23:03.372 "min_read_latency_ticks": 12650, 00:23:03.372 "write_latency_ticks": 0, 00:23:03.372 "max_write_latency_ticks": 0, 00:23:03.372 "min_write_latency_ticks": 0, 00:23:03.372 "unmap_latency_ticks": 0, 00:23:03.372 "max_unmap_latency_ticks": 0, 00:23:03.372 "min_unmap_latency_ticks": 0, 00:23:03.372 "copy_latency_ticks": 0, 00:23:03.372 "max_copy_latency_ticks": 0, 00:23:03.372 "min_copy_latency_ticks": 0, 00:23:03.372 
"io_error": {} 00:23:03.372 } 00:23:03.372 ] 00:23:03.372 }' 00:23:03.372 05:12:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:23:03.372 05:12:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=1126778 00:23:03.372 05:12:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:23:03.372 05:12:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=1154872832 00:23:03.372 05:12:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=50373 00:23:03.372 05:12:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=51582361 00:23:03.372 05:12:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@129 -- # '[' 51582361 -gt 25165824 ']' 00:23:03.372 05:12:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@132 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_mbytes_per_sec 24 --r_mbytes_per_sec 12 00:23:03.372 05:12:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.372 05:12:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:23:03.372 05:12:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.372 05:12:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@133 -- # run_fio Malloc0 00:23:03.372 05:12:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:23:03.372 05:12:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:23:03.372 05:12:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:23:03.372 05:12:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:23:03.372 05:12:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:23:03.372 05:12:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:23:03.372 05:12:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:23:03.372 05:12:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:23:03.372 05:12:17 
iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.372 05:12:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:23:03.631 05:12:18 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.631 05:12:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:23:03.631 "tick_rate": 2100000000, 00:23:03.631 "ticks": 2759364376376, 00:23:03.631 "bdevs": [ 00:23:03.631 { 00:23:03.631 "name": "Malloc0", 00:23:03.631 "bytes_read": 1154872832, 00:23:03.631 "num_read_ops": 1126778, 00:23:03.631 "bytes_written": 0, 00:23:03.631 "num_write_ops": 0, 00:23:03.631 "bytes_unmapped": 0, 00:23:03.631 "num_unmap_ops": 0, 00:23:03.631 "bytes_copied": 0, 00:23:03.631 "num_copy_ops": 0, 00:23:03.631 "read_latency_ticks": 1802732799758, 00:23:03.631 "max_read_latency_ticks": 11811074, 00:23:03.631 "min_read_latency_ticks": 12650, 00:23:03.631 "write_latency_ticks": 0, 00:23:03.631 "max_write_latency_ticks": 0, 00:23:03.631 "min_write_latency_ticks": 0, 00:23:03.631 "unmap_latency_ticks": 0, 00:23:03.631 "max_unmap_latency_ticks": 0, 00:23:03.631 "min_unmap_latency_ticks": 0, 00:23:03.631 "copy_latency_ticks": 0, 00:23:03.631 "max_copy_latency_ticks": 0, 00:23:03.631 "min_copy_latency_ticks": 0, 00:23:03.631 "io_error": {} 00:23:03.631 } 00:23:03.631 ] 00:23:03.631 }' 00:23:03.631 05:12:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:23:03.631 05:12:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=1126778 00:23:03.631 05:12:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:23:03.631 05:12:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=1154872832 00:23:03.631 05:12:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:23:03.631 [global] 00:23:03.631 thread=1 00:23:03.631 invalidate=1 00:23:03.631 rw=randread 
00:23:03.631 time_based=1 00:23:03.631 runtime=5 00:23:03.631 ioengine=libaio 00:23:03.631 direct=1 00:23:03.631 bs=1024 00:23:03.631 iodepth=128 00:23:03.631 norandommap=1 00:23:03.631 numjobs=1 00:23:03.631 00:23:03.631 [job0] 00:23:03.631 filename=/dev/sda 00:23:03.631 queue_depth set to 113 (sda) 00:23:03.631 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:23:03.631 fio-3.35 00:23:03.631 Starting 1 thread 00:23:08.923 00:23:08.923 job0: (groupid=0, jobs=1): err= 0: pid=81473: Wed Jul 24 05:12:23 2024 00:23:08.923 read: IOPS=12.3k, BW=12.0MiB/s (12.6MB/s)(60.1MiB/5010msec) 00:23:08.923 slat (usec): min=2, max=2042, avg=77.95, stdev=229.74 00:23:08.923 clat (usec): min=2056, max=19760, avg=10336.42, stdev=552.46 00:23:08.923 lat (usec): min=2077, max=19775, avg=10414.37, stdev=566.55 00:23:08.923 clat percentiles (usec): 00:23:08.923 | 1.00th=[ 9372], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10028], 00:23:08.923 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10159], 60.00th=[10159], 00:23:08.923 | 70.00th=[10814], 80.00th=[10945], 90.00th=[10945], 95.00th=[10945], 00:23:08.923 | 99.00th=[11076], 99.50th=[11207], 99.90th=[14877], 99.95th=[16909], 00:23:08.923 | 99.99th=[19006] 00:23:08.923 bw ( KiB/s): min=12166, max=12312, per=99.97%, avg=12282.10, stdev=44.33, samples=10 00:23:08.923 iops : min=12166, max=12312, avg=12282.10, stdev=44.33, samples=10 00:23:08.923 lat (msec) : 4=0.06%, 10=7.10%, 20=92.84% 00:23:08.923 cpu : usr=4.03%, sys=9.64%, ctx=35561, majf=0, minf=32 00:23:08.923 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:08.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:08.923 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:08.923 issued rwts: total=61550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:08.923 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:08.923 00:23:08.923 Run 
status group 0 (all jobs): 00:23:08.923 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=60.1MiB (63.0MB), run=5010-5010msec 00:23:08.923 00:23:08.923 Disk stats (read/write): 00:23:08.923 sda: ios=60072/0, merge=0/0, ticks=543765/0, in_queue=543765, util=98.09% 00:23:08.923 05:12:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:23:08.923 05:12:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.923 05:12:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:23:08.923 05:12:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.923 05:12:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:23:08.923 "tick_rate": 2100000000, 00:23:08.923 "ticks": 2770747929036, 00:23:08.923 "bdevs": [ 00:23:08.923 { 00:23:08.923 "name": "Malloc0", 00:23:08.923 "bytes_read": 1217900032, 00:23:08.924 "num_read_ops": 1188328, 00:23:08.924 "bytes_written": 0, 00:23:08.924 "num_write_ops": 0, 00:23:08.924 "bytes_unmapped": 0, 00:23:08.924 "num_unmap_ops": 0, 00:23:08.924 "bytes_copied": 0, 00:23:08.924 "num_copy_ops": 0, 00:23:08.924 "read_latency_ticks": 2415365103460, 00:23:08.924 "max_read_latency_ticks": 14537720, 00:23:08.924 "min_read_latency_ticks": 12650, 00:23:08.924 "write_latency_ticks": 0, 00:23:08.924 "max_write_latency_ticks": 0, 00:23:08.924 "min_write_latency_ticks": 0, 00:23:08.924 "unmap_latency_ticks": 0, 00:23:08.924 "max_unmap_latency_ticks": 0, 00:23:08.924 "min_unmap_latency_ticks": 0, 00:23:08.924 "copy_latency_ticks": 0, 00:23:08.924 "max_copy_latency_ticks": 0, 00:23:08.924 "min_copy_latency_ticks": 0, 00:23:08.924 "io_error": {} 00:23:08.924 } 00:23:08.924 ] 00:23:08.924 }' 00:23:08.924 05:12:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:23:08.924 05:12:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=1188328 00:23:08.924 05:12:23 
iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:23:08.924 05:12:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=1217900032 00:23:08.924 05:12:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=12310 00:23:08.924 05:12:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=12605440 00:23:08.924 05:12:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@134 -- # verify_qos_limits 12605440 12582912 00:23:08.924 05:12:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=12605440 00:23:08.924 05:12:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=12582912 00:23:08.924 05:12:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:23:08.924 05:12:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:23:08.924 05:12:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:23:08.924 05:12:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:23:08.924 I/O bandwidth limiting tests successful 00:23:08.924 05:12:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@136 -- # echo 'I/O bandwidth limiting tests successful' 00:23:08.924 05:12:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@138 -- # iscsicleanup 00:23:08.924 Cleaning up iSCSI connection 00:23:08.924 05:12:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:23:08.924 05:12:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:23:09.182 Logging out of session [sid: 20, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:23:09.182 Logout of [sid: 20, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
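Each verify_qos_limits call in the trace (qos.sh@37-41) compares the measured rate against the configured limit via two bc(1) comparisons, both of which evaluated to 1 here. The actual tolerance band lives in qos.sh and is not visible in this log, so the 0.9/1.1 bounds below are an assumption for illustration only; the sketch simply shows a band check that accepts the three result/limit pairs logged above:

```python
# Hypothetical re-implementation of verify_qos_limits. The real script uses
# two bc(1) comparisons (qos.sh@40 and qos.sh@41); the 0.9/1.1 bounds here
# are an ASSUMED tolerance band, not values taken from qos.sh.

def verify_qos_limits(result, limit, lower=0.9, upper=1.1):
    """True when the measured rate sits within the assumed band around the limit."""
    return limit * lower <= result <= limit * upper

# The three checks that passed in this trace:
ok_iops = verify_qos_limits(25026, 25000)         # 25000 rw_ios_per_sec limit
ok_bw   = verify_qos_limits(25187532, 25165824)   # 24 MiB/s rw_mbytes limit
ok_rbw  = verify_qos_limits(12605440, 12582912)   # 12 MiB/s r_mbytes limit
print(ok_iops, ok_bw, ok_rbw)
```

Note that all three measured rates land just 0.1-0.2% above their limits, which is why any reasonable tolerance band passes them.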
00:23:09.182 05:12:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:23:09.182 05:12:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@983 -- # rm -rf 00:23:09.182 05:12:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@139 -- # rpc_cmd iscsi_delete_target_node iqn.2016-06.io.spdk:Target1 00:23:09.182 05:12:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.182 05:12:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:23:09.182 05:12:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.182 05:12:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@141 -- # rm -f ./local-job0-0-verify.state 00:23:09.182 05:12:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:23:09.182 05:12:23 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@143 -- # killprocess 80850 00:23:09.182 05:12:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@948 -- # '[' -z 80850 ']' 00:23:09.182 05:12:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@952 -- # kill -0 80850 00:23:09.182 05:12:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@953 -- # uname 00:23:09.182 05:12:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:09.182 05:12:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80850 00:23:09.182 05:12:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:09.182 killing process with pid 80850 00:23:09.182 05:12:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:09.182 05:12:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80850' 00:23:09.182 05:12:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@967 -- # kill 80850 00:23:09.182 05:12:23 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@972 -- # wait 80850 00:23:11.714 
05:12:26 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@145 -- # iscsitestfini 00:23:11.714 05:12:26 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:23:11.714 00:23:11.714 real 0m44.320s 00:23:11.714 user 0m40.884s 00:23:11.714 sys 0m11.761s 00:23:11.714 05:12:26 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:11.714 ************************************ 00:23:11.714 END TEST iscsi_tgt_qos 00:23:11.714 ************************************ 00:23:11.714 05:12:26 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:23:11.714 05:12:26 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@39 -- # run_test iscsi_tgt_ip_migration /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration/ip_migration.sh 00:23:11.714 05:12:26 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:11.714 05:12:26 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:11.714 05:12:26 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:23:11.714 ************************************ 00:23:11.714 START TEST iscsi_tgt_ip_migration 00:23:11.714 ************************************ 00:23:11.714 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration/ip_migration.sh 00:23:11.975 * Looking for test storage... 
00:23:11.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@26 
-- # INITIATOR_NAME=ANY 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@11 -- # iscsitestinit 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@13 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@14 -- # pids=() 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@16 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:23:11.975 05:12:26 
iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:23:11.975 #define SPDK_CONFIG_H 00:23:11.975 #define SPDK_CONFIG_APPS 1 00:23:11.975 #define SPDK_CONFIG_ARCH native 00:23:11.975 #define SPDK_CONFIG_ASAN 1 00:23:11.975 #undef SPDK_CONFIG_AVAHI 00:23:11.975 #undef SPDK_CONFIG_CET 00:23:11.975 #define SPDK_CONFIG_COVERAGE 1 00:23:11.975 #define SPDK_CONFIG_CROSS_PREFIX 00:23:11.975 #undef SPDK_CONFIG_CRYPTO 00:23:11.975 #undef SPDK_CONFIG_CRYPTO_MLX5 00:23:11.975 #undef SPDK_CONFIG_CUSTOMOCF 00:23:11.975 #undef SPDK_CONFIG_DAOS 00:23:11.975 #define SPDK_CONFIG_DAOS_DIR 00:23:11.975 #define SPDK_CONFIG_DEBUG 1 00:23:11.975 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:23:11.975 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:23:11.975 #define SPDK_CONFIG_DPDK_INC_DIR 00:23:11.975 #define SPDK_CONFIG_DPDK_LIB_DIR 00:23:11.975 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:23:11.975 #undef SPDK_CONFIG_DPDK_UADK 00:23:11.975 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:23:11.975 #define SPDK_CONFIG_EXAMPLES 1 
00:23:11.975 #undef SPDK_CONFIG_FC 00:23:11.975 #define SPDK_CONFIG_FC_PATH 00:23:11.975 #define SPDK_CONFIG_FIO_PLUGIN 1 00:23:11.975 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:23:11.975 #undef SPDK_CONFIG_FUSE 00:23:11.975 #undef SPDK_CONFIG_FUZZER 00:23:11.975 #define SPDK_CONFIG_FUZZER_LIB 00:23:11.975 #undef SPDK_CONFIG_GOLANG 00:23:11.975 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:23:11.975 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:23:11.975 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:23:11.975 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:23:11.975 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:23:11.975 #undef SPDK_CONFIG_HAVE_LIBBSD 00:23:11.975 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:23:11.975 #define SPDK_CONFIG_IDXD 1 00:23:11.975 #define SPDK_CONFIG_IDXD_KERNEL 1 00:23:11.975 #undef SPDK_CONFIG_IPSEC_MB 00:23:11.975 #define SPDK_CONFIG_IPSEC_MB_DIR 00:23:11.975 #define SPDK_CONFIG_ISAL 1 00:23:11.975 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:23:11.975 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:23:11.975 #define SPDK_CONFIG_LIBDIR 00:23:11.975 #undef SPDK_CONFIG_LTO 00:23:11.975 #define SPDK_CONFIG_MAX_LCORES 128 00:23:11.975 #define SPDK_CONFIG_NVME_CUSE 1 00:23:11.975 #undef SPDK_CONFIG_OCF 00:23:11.975 #define SPDK_CONFIG_OCF_PATH 00:23:11.975 #define SPDK_CONFIG_OPENSSL_PATH 00:23:11.975 #undef SPDK_CONFIG_PGO_CAPTURE 00:23:11.975 #define SPDK_CONFIG_PGO_DIR 00:23:11.975 #undef SPDK_CONFIG_PGO_USE 00:23:11.975 #define SPDK_CONFIG_PREFIX /usr/local 00:23:11.975 #undef SPDK_CONFIG_RAID5F 00:23:11.975 #undef SPDK_CONFIG_RBD 00:23:11.975 #define SPDK_CONFIG_RDMA 1 00:23:11.975 #define SPDK_CONFIG_RDMA_PROV verbs 00:23:11.975 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:23:11.975 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:23:11.975 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:23:11.975 #define SPDK_CONFIG_SHARED 1 00:23:11.975 #undef SPDK_CONFIG_SMA 00:23:11.975 #define SPDK_CONFIG_TESTS 1 00:23:11.975 #undef SPDK_CONFIG_TSAN 00:23:11.975 #define SPDK_CONFIG_UBLK 1 
00:23:11.975 #define SPDK_CONFIG_UBSAN 1 00:23:11.975 #undef SPDK_CONFIG_UNIT_TESTS 00:23:11.975 #define SPDK_CONFIG_URING 1 00:23:11.975 #define SPDK_CONFIG_URING_PATH 00:23:11.975 #define SPDK_CONFIG_URING_ZNS 1 00:23:11.975 #undef SPDK_CONFIG_USDT 00:23:11.975 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:23:11.975 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:23:11.975 #undef SPDK_CONFIG_VFIO_USER 00:23:11.975 #define SPDK_CONFIG_VFIO_USER_DIR 00:23:11.975 #define SPDK_CONFIG_VHOST 1 00:23:11.975 #define SPDK_CONFIG_VIRTIO 1 00:23:11.975 #undef SPDK_CONFIG_VTUNE 00:23:11.975 #define SPDK_CONFIG_VTUNE_DIR 00:23:11.975 #define SPDK_CONFIG_WERROR 1 00:23:11.975 #define SPDK_CONFIG_WPDK_DIR 00:23:11.975 #undef SPDK_CONFIG_XNVME 00:23:11.975 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@17 -- # NETMASK=127.0.0.0/24 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@18 -- # MIGRATION_ADDRESS=127.0.0.2 00:23:11.975 Running ip migration tests 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@56 -- # echo 'Running ip migration tests' 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@57 -- # timing_enter start_iscsi_tgt_0 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@58 -- # rpc_first_addr=/var/tmp/spdk0.sock 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@59 -- # iscsi_tgt_start /var/tmp/spdk0.sock 1 00:23:11.975 05:12:26 
iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@39 -- # pid=81626 00:23:11.975 Process pid: 81626 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@40 -- # echo 'Process pid: 81626' 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk0.sock -m 1 --wait-for-rpc 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@41 -- # pids+=($pid) 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@43 -- # trap 'kill_all_iscsi_target; exit 1' SIGINT SIGTERM EXIT 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@45 -- # waitforlisten 81626 /var/tmp/spdk0.sock 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@829 -- # '[' -z 81626 ']' 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk0.sock 00:23:11.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock... 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:11.975 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock...' 00:23:11.976 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:11.976 05:12:26 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:23:11.976 [2024-07-24 05:12:26.507966] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:23:11.976 [2024-07-24 05:12:26.508085] [ DPDK EAL parameters: iscsi --no-shconf -c 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81626 ] 00:23:12.235 [2024-07-24 05:12:26.667022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.493 [2024-07-24 05:12:26.888190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.061 05:12:27 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:13.061 05:12:27 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@862 -- # return 0 00:23:13.061 05:12:27 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@46 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_set_options -o 30 -a 64 00:23:13.061 05:12:27 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.061 05:12:27 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:23:13.061 05:12:27 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.061 05:12:27 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@47 -- # rpc_cmd -s /var/tmp/spdk0.sock framework_start_init 00:23:13.061 05:12:27 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.061 05:12:27 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:23:13.061 [2024-07-24 05:12:27.657456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:13.998 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.998 iscsi_tgt is listening. Running tests... 00:23:13.998 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@48 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:23:13.998 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@50 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_initiator_group 2 ANY 127.0.0.0/24 00:23:13.999 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.999 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:23:13.999 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.999 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@51 -- # rpc_cmd -s /var/tmp/spdk0.sock bdev_malloc_create 64 512 00:23:13.999 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.999 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:23:13.999 Malloc0 00:23:13.999 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.999 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@53 -- # trap 'kill_all_iscsi_target; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:23:13.999 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@60 -- # timing_exit start_iscsi_tgt_0 00:23:13.999 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:13.999 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:23:13.999 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@62 -- # timing_enter start_iscsi_tgt_1 00:23:13.999 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:13.999 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:23:13.999 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@63 -- # rpc_second_addr=/var/tmp/spdk1.sock 00:23:13.999 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- 
ip_migration/ip_migration.sh@64 -- # iscsi_tgt_start /var/tmp/spdk1.sock 2 00:23:13.999 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@39 -- # pid=81666 00:23:13.999 Process pid: 81666 00:23:13.999 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@40 -- # echo 'Process pid: 81666' 00:23:13.999 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@41 -- # pids+=($pid) 00:23:13.999 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@43 -- # trap 'kill_all_iscsi_target; exit 1' SIGINT SIGTERM EXIT 00:23:13.999 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@45 -- # waitforlisten 81666 /var/tmp/spdk1.sock 00:23:13.999 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk1.sock -m 2 --wait-for-rpc 00:23:13.999 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@829 -- # '[' -z 81666 ']' 00:23:13.999 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk1.sock 00:23:13.999 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:13.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock... 00:23:13.999 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock...' 00:23:13.999 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:13.999 05:12:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:23:13.999 [2024-07-24 05:12:28.503142] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:23:13.999 [2024-07-24 05:12:28.503276] [ DPDK EAL parameters: iscsi --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81666 ] 00:23:14.258 [2024-07-24 05:12:28.665141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.517 [2024-07-24 05:12:28.939679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.776 05:12:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:14.776 05:12:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@862 -- # return 0 00:23:14.776 05:12:29 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@46 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_set_options -o 30 -a 64 00:23:14.776 05:12:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.776 05:12:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:23:14.776 05:12:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.776 05:12:29 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@47 -- # rpc_cmd -s /var/tmp/spdk1.sock framework_start_init 00:23:14.776 05:12:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.776 05:12:29 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:23:15.036 [2024-07-24 05:12:29.630062] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:15.973 05:12:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.973 iscsi_tgt is listening. Running tests... 00:23:15.973 05:12:30 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@48 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:23:15.973 05:12:30 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@50 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_initiator_group 2 ANY 127.0.0.0/24 00:23:15.973 05:12:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.973 05:12:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:23:15.973 05:12:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.973 05:12:30 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@51 -- # rpc_cmd -s /var/tmp/spdk1.sock bdev_malloc_create 64 512 00:23:15.973 05:12:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.973 05:12:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:23:15.973 Malloc0 00:23:15.973 05:12:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.973 05:12:30 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@53 -- # trap 'kill_all_iscsi_target; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:23:15.973 05:12:30 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@65 -- # timing_exit start_iscsi_tgt_1 00:23:15.973 05:12:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:15.973 05:12:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:23:15.973 05:12:30 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@67 -- # rpc_add_target_node /var/tmp/spdk0.sock 00:23:15.973 05:12:30 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@28 -- # ip netns exec spdk_iscsi_ns ip addr add 127.0.0.2/24 dev spdk_tgt_int 00:23:15.973 05:12:30 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@29 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_portal_group 1 127.0.0.2:3260 00:23:15.973 05:12:30 iscsi_tgt.iscsi_tgt_ip_migration -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.973 05:12:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:23:15.973 05:12:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.973 05:12:30 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@30 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_target_node target1 target1_alias Malloc0:0 1:2 64 -d 00:23:15.973 05:12:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.973 05:12:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:23:15.973 05:12:30 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.973 05:12:30 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@31 -- # ip netns exec spdk_iscsi_ns ip addr del 127.0.0.2/24 dev spdk_tgt_int 00:23:15.973 05:12:30 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@69 -- # sleep 1 00:23:16.909 05:12:31 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@70 -- # iscsiadm -m discovery -t sendtargets -p 127.0.0.2:3260 00:23:16.909 127.0.0.2:3260,1 iqn.2016-06.io.spdk:target1 00:23:16.909 05:12:31 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@71 -- # sleep 1 00:23:18.288 05:12:32 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@72 -- # iscsiadm -m node --login -p 127.0.0.2:3260 00:23:18.288 Logging in to [iface: default, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] 00:23:18.288 Login to [iface: default, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] successful. 
00:23:18.288 05:12:32 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@73 -- # waitforiscsidevices 1 00:23:18.288 05:12:32 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@116 -- # local num=1 00:23:18.288 05:12:32 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:23:18.288 05:12:32 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:23:18.288 05:12:32 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:23:18.288 05:12:32 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:23:18.288 05:12:32 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # n=1 00:23:18.288 05:12:32 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:23:18.288 05:12:32 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@123 -- # return 0 00:23:18.288 [2024-07-24 05:12:32.529264] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:18.288 05:12:32 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@77 -- # fiopid=81751 00:23:18.288 05:12:32 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 32 -t randrw -r 12 00:23:18.288 05:12:32 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@78 -- # sleep 3 00:23:18.288 [global] 00:23:18.288 thread=1 00:23:18.288 invalidate=1 00:23:18.288 rw=randrw 00:23:18.288 time_based=1 00:23:18.288 runtime=12 00:23:18.288 ioengine=libaio 00:23:18.288 direct=1 00:23:18.288 bs=4096 00:23:18.288 iodepth=32 00:23:18.288 norandommap=1 00:23:18.288 numjobs=1 00:23:18.288 00:23:18.288 [job0] 00:23:18.288 filename=/dev/sda 00:23:18.288 queue_depth set to 113 (sda) 00:23:18.288 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32 00:23:18.288 fio-3.35 
00:23:18.288 Starting 1 thread 00:23:18.288 [2024-07-24 05:12:32.722794] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:21.585 05:12:35 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@80 -- # rpc_cmd -s /var/tmp/spdk0.sock spdk_kill_instance SIGTERM 00:23:21.585 05:12:35 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.585 05:12:35 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:23:22.153 05:12:36 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.153 05:12:36 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@81 -- # wait 81626 00:23:24.059 05:12:38 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@83 -- # rpc_add_target_node /var/tmp/spdk1.sock 00:23:24.059 05:12:38 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@28 -- # ip netns exec spdk_iscsi_ns ip addr add 127.0.0.2/24 dev spdk_tgt_int 00:23:24.059 05:12:38 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@29 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_portal_group 1 127.0.0.2:3260 00:23:24.059 05:12:38 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.059 05:12:38 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:23:24.059 05:12:38 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.059 05:12:38 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@30 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_target_node target1 target1_alias Malloc0:0 1:2 64 -d 00:23:24.059 05:12:38 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.059 05:12:38 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:23:24.059 05:12:38 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:23:24.059 05:12:38 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@31 -- # ip netns exec spdk_iscsi_ns ip addr del 127.0.0.2/24 dev spdk_tgt_int 00:23:24.059 05:12:38 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@85 -- # wait 81751 00:23:30.624 [2024-07-24 05:12:44.828668] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:30.624 00:23:30.624 job0: (groupid=0, jobs=1): err= 0: pid=81777: Wed Jul 24 05:12:44 2024 00:23:30.624 read: IOPS=11.5k, BW=45.1MiB/s (47.3MB/s)(541MiB/12001msec) 00:23:30.624 slat (usec): min=3, max=228, avg= 4.85, stdev= 2.86 00:23:30.624 clat (usec): min=383, max=5007.5k, avg=1427.40, stdev=55453.06 00:23:30.624 lat (usec): min=397, max=5007.5k, avg=1432.25, stdev=55453.08 00:23:30.624 clat percentiles (usec): 00:23:30.624 | 1.00th=[ 523], 5.00th=[ 586], 10.00th=[ 685], 00:23:30.624 | 20.00th=[ 742], 30.00th=[ 766], 40.00th=[ 783], 00:23:30.624 | 50.00th=[ 799], 60.00th=[ 824], 70.00th=[ 848], 00:23:30.624 | 80.00th=[ 889], 90.00th=[ 996], 95.00th=[ 1057], 00:23:30.624 | 99.00th=[ 1106], 99.50th=[ 1123], 99.90th=[ 1188], 00:23:30.624 | 99.95th=[ 1352], 99.99th=[4999611] 00:23:30.624 bw ( KiB/s): min=32816, max=80816, per=100.00%, avg=73778.86, stdev=14753.56, samples=14 00:23:30.624 iops : min= 8204, max=20204, avg=18444.71, stdev=3688.39, samples=14 00:23:30.624 write: IOPS=11.6k, BW=45.2MiB/s (47.4MB/s)(542MiB/12001msec); 0 zone resets 00:23:30.624 slat (nsec): min=3422, max=95398, avg=4844.21, stdev=2957.67 00:23:30.624 clat (usec): min=286, max=5007.4k, avg=1331.43, stdev=52050.94 00:23:30.624 lat (usec): min=309, max=5007.4k, avg=1336.27, stdev=52050.95 00:23:30.624 clat percentiles (usec): 00:23:30.624 | 1.00th=[ 498], 5.00th=[ 594], 10.00th=[ 668], 00:23:30.624 | 20.00th=[ 701], 30.00th=[ 725], 40.00th=[ 750], 00:23:30.624 | 50.00th=[ 766], 60.00th=[ 791], 70.00th=[ 832], 00:23:30.624 | 80.00th=[ 889], 90.00th=[ 988], 95.00th=[ 1029], 00:23:30.624 | 
99.00th=[ 1090], 99.50th=[ 1106], 99.90th=[ 1156], 00:23:30.624 | 99.95th=[ 1270], 99.99th=[4999611] 00:23:30.624 bw ( KiB/s): min=33664, max=80496, per=100.00%, avg=73796.00, stdev=14488.28, samples=14 00:23:30.624 iops : min= 8416, max=20124, avg=18449.00, stdev=3622.07, samples=14 00:23:30.624 lat (usec) : 500=0.74%, 750=31.92%, 1000=58.38% 00:23:30.624 lat (msec) : 2=8.95%, 4=0.01%, >=2000=0.01% 00:23:30.624 cpu : usr=5.21%, sys=10.74%, ctx=21474, majf=0, minf=1 00:23:30.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0% 00:23:30.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:23:30.624 issued rwts: total=138559,138762,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.624 latency : target=0, window=0, percentile=100.00%, depth=32 00:23:30.624 00:23:30.624 Run status group 0 (all jobs): 00:23:30.624 READ: bw=45.1MiB/s (47.3MB/s), 45.1MiB/s-45.1MiB/s (47.3MB/s-47.3MB/s), io=541MiB (568MB), run=12001-12001msec 00:23:30.624 WRITE: bw=45.2MiB/s (47.4MB/s), 45.2MiB/s-45.2MiB/s (47.4MB/s-47.4MB/s), io=542MiB (568MB), run=12001-12001msec 00:23:30.624 00:23:30.624 Disk stats (read/write): 00:23:30.624 sda: ios=136490/136532, merge=0/0, ticks=185765/177213, in_queue=362978, util=99.37% 00:23:30.624 05:12:44 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@87 -- # trap - SIGINT SIGTERM EXIT 00:23:30.624 05:12:44 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@89 -- # iscsicleanup 00:23:30.624 Cleaning up iSCSI connection 00:23:30.624 05:12:44 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:23:30.624 05:12:44 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:23:30.624 Logging out of session [sid: 21, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] 00:23:30.624 Logout of [sid: 21, 
target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] successful. 00:23:30.624 05:12:44 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:23:30.624 05:12:44 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@983 -- # rm -rf 00:23:30.624 05:12:44 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@91 -- # rpc_cmd -s /var/tmp/spdk1.sock spdk_kill_instance SIGTERM 00:23:30.624 05:12:44 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.624 05:12:44 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:23:31.560 05:12:46 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.560 05:12:46 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@92 -- # wait 81666 00:23:32.954 05:12:47 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@93 -- # iscsitestfini 00:23:32.954 05:12:47 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:23:32.954 00:23:32.954 real 0m21.259s 00:23:32.954 user 0m29.837s 00:23:32.954 sys 0m3.560s 00:23:32.954 05:12:47 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:32.954 05:12:47 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:23:32.954 ************************************ 00:23:32.954 END TEST iscsi_tgt_ip_migration 00:23:32.954 ************************************ 00:23:33.213 05:12:47 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@40 -- # run_test iscsi_tgt_trace_record /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record/trace_record.sh 00:23:33.213 05:12:47 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:33.213 05:12:47 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:33.213 05:12:47 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:23:33.213 ************************************ 
00:23:33.213 START TEST iscsi_tgt_trace_record 00:23:33.213 ************************************ 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record/trace_record.sh 00:23:33.213 * Looking for test storage... 00:23:33.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@23 -- 
# ISCSI_PORT=3260 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@11 -- # iscsitestinit 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@13 -- # TRACE_TMP_FOLDER=./tmp-trace 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@14 -- # TRACE_RECORD_OUTPUT=./tmp-trace/record.trace 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@15 -- # TRACE_RECORD_NOTICE_LOG=./tmp-trace/record.notice 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@16 -- # TRACE_TOOL_LOG=./tmp-trace/trace.log 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@22 -- # '[' -z 10.0.0.1 ']' 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@27 -- # '[' -z 10.0.0.2 ']' 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@32 -- # NUM_TRACE_ENTRIES=4096 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@33 -- # MALLOC_BDEV_SIZE=64 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@34 -- # 
MALLOC_BLOCK_SIZE=4096 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@36 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@37 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@39 -- # timing_enter start_iscsi_tgt 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:23:33.213 start iscsi_tgt with trace enabled 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@41 -- # echo 'start iscsi_tgt with trace enabled' 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@43 -- # iscsi_pid=82001 00:23:33.213 Process pid: 82001 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@44 -- # echo 'Process pid: 82001' 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@46 -- # trap 'killprocess $iscsi_pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@42 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xf --num-trace-entries 4096 --tpoint-group all 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@48 -- # waitforlisten 82001 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@829 -- # '[' -z 82001 ']' 00:23:33.213 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.214 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:33.214 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.214 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.214 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:33.214 05:12:47 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:23:33.473 [2024-07-24 05:12:47.857277] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:23:33.473 [2024-07-24 05:12:47.857439] [ DPDK EAL parameters: iscsi --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82001 ] 00:23:33.473 [2024-07-24 05:12:48.040582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:33.732 [2024-07-24 05:12:48.268954] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask all specified. 00:23:33.733 [2024-07-24 05:12:48.269005] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s iscsi -p 82001' to capture a snapshot of events at runtime. 00:23:33.733 [2024-07-24 05:12:48.269028] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.733 [2024-07-24 05:12:48.269055] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.733 [2024-07-24 05:12:48.269068] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/iscsi_trace.pid82001 for offline analysis/debug. 
00:23:33.733 [2024-07-24 05:12:48.269267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.733 [2024-07-24 05:12:48.270245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.733 [2024-07-24 05:12:48.270325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.733 [2024-07-24 05:12:48.270359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:33.992 [2024-07-24 05:12:48.512514] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:34.561 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:34.561 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@862 -- # return 0 00:23:34.561 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@50 -- # echo 'iscsi_tgt is listening. Running tests...' 00:23:34.561 iscsi_tgt is listening. Running tests... 00:23:34.561 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@52 -- # timing_exit start_iscsi_tgt 00:23:34.561 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:34.561 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@54 -- # mkdir -p ./tmp-trace 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@56 -- # record_pid=82040 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@57 -- # echo 'Trace record pid: 82040' 00:23:34.820 Trace record pid: 82040 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@59 -- # RPCS= 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@60 -- # RPCS+='iscsi_create_portal_group 1 10.0.0.1:3260\n' 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@61 -- # RPCS+='iscsi_create_initiator_group 2 ANY 10.0.0.2/32\n' 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace_record -s iscsi -p 82001 -f ./tmp-trace/record.trace -q 00:23:34.820 Create bdevs and target nodes 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@63 -- # echo 'Create bdevs and target nodes' 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@64 -- # CONNECTION_NUMBER=15 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # seq 0 15 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc0\n' 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target0 Target0_alias Malloc0:0 1:2 256 -d\n' 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc1\n' 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target1 Target1_alias Malloc1:0 1:2 256 -d\n' 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc2\n' 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target2 
Target2_alias Malloc2:0 1:2 256 -d\n' 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc3\n' 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target3 Target3_alias Malloc3:0 1:2 256 -d\n' 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc4\n' 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target4 Target4_alias Malloc4:0 1:2 256 -d\n' 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc5\n' 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target5 Target5_alias Malloc5:0 1:2 256 -d\n' 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc6\n' 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target6 Target6_alias Malloc6:0 1:2 256 -d\n' 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:34.820 05:12:49 
iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc7\n' 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target7 Target7_alias Malloc7:0 1:2 256 -d\n' 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc8\n' 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target8 Target8_alias Malloc8:0 1:2 256 -d\n' 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc9\n' 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target9 Target9_alias Malloc9:0 1:2 256 -d\n' 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc10\n' 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target10 Target10_alias Malloc10:0 1:2 256 -d\n' 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc11\n' 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target11 Target11_alias Malloc11:0 1:2 256 -d\n' 00:23:34.820 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:34.821 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc12\n' 00:23:34.821 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target12 Target12_alias Malloc12:0 1:2 256 -d\n' 00:23:34.821 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:34.821 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc13\n' 00:23:34.821 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target13 Target13_alias Malloc13:0 1:2 256 -d\n' 00:23:34.821 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:34.821 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc14\n' 00:23:34.821 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target14 Target14_alias Malloc14:0 1:2 256 -d\n' 00:23:34.821 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:34.821 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc15\n' 00:23:34.821 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target15 Target15_alias Malloc15:0 1:2 256 -d\n' 00:23:34.821 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:34.821 05:12:49 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@69 -- # echo -e iscsi_create_portal_group 1 '10.0.0.1:3260\niscsi_create_initiator_group' 2 ANY '10.0.0.2/32\nbdev_malloc_create' 64 4096 -b 'Malloc0\niscsi_create_target_node' Target0 Target0_alias Malloc0:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc1\niscsi_create_target_node' Target1 Target1_alias Malloc1:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc2\niscsi_create_target_node' Target2 Target2_alias Malloc2:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc3\niscsi_create_target_node' Target3 Target3_alias Malloc3:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc4\niscsi_create_target_node' Target4 Target4_alias Malloc4:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc5\niscsi_create_target_node' Target5 Target5_alias Malloc5:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc6\niscsi_create_target_node' Target6 Target6_alias Malloc6:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc7\niscsi_create_target_node' Target7 Target7_alias Malloc7:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc8\niscsi_create_target_node' Target8 Target8_alias Malloc8:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc9\niscsi_create_target_node' Target9 Target9_alias Malloc9:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc10\niscsi_create_target_node' Target10 Target10_alias Malloc10:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc11\niscsi_create_target_node' Target11 Target11_alias Malloc11:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc12\niscsi_create_target_node' Target12 Target12_alias Malloc12:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc13\niscsi_create_target_node' Target13 Target13_alias Malloc13:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc14\niscsi_create_target_node' Target14 Target14_alias Malloc14:0 1:2 256 '-d\nbdev_malloc_create' 
64 4096 -b 'Malloc15\niscsi_create_target_node' Target15 Target15_alias Malloc15:0 1:2 256 '-d\n' 00:23:36.726 Malloc0 00:23:36.726 Malloc1 00:23:36.726 Malloc2 00:23:36.726 Malloc3 00:23:36.726 Malloc4 00:23:36.726 Malloc5 00:23:36.726 Malloc6 00:23:36.726 Malloc7 00:23:36.726 Malloc8 00:23:36.726 Malloc9 00:23:36.726 Malloc10 00:23:36.726 Malloc11 00:23:36.726 Malloc12 00:23:36.726 Malloc13 00:23:36.726 Malloc14 00:23:36.726 Malloc15 00:23:36.726 05:12:50 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@71 -- # sleep 1 00:23:37.661 05:12:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@73 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:23:37.661 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target0 00:23:37.661 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:23:37.661 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2 00:23:37.661 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:23:37.661 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target4 00:23:37.661 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target5 00:23:37.662 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target6 00:23:37.662 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target7 00:23:37.662 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target8 00:23:37.662 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target9 00:23:37.662 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target10 00:23:37.662 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target11 00:23:37.662 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target12 00:23:37.662 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target13 00:23:37.662 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target14 00:23:37.662 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target15 00:23:37.662 05:12:51 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@74 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:23:37.662 [2024-07-24 05:12:52.004976] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:37.662 [2024-07-24 05:12:52.024731] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 
00:23:37.662 [2024-07-24 05:12:52.044383] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:37.662 [2024-07-24 05:12:52.083882] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:37.662 [2024-07-24 05:12:52.089605] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:37.662 [2024-07-24 05:12:52.106377] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:37.662 [2024-07-24 05:12:52.139488] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:37.662 [2024-07-24 05:12:52.176334] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:37.662 [2024-07-24 05:12:52.203823] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:37.662 [2024-07-24 05:12:52.237863] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:37.662 [2024-07-24 05:12:52.247914] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:37.920 [2024-07-24 05:12:52.297319] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:37.920 [2024-07-24 05:12:52.323087] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:37.920 [2024-07-24 05:12:52.351286] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:37.920 [2024-07-24 05:12:52.385113] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:37.920 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] 00:23:37.920 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:23:37.920 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:23:37.920 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 
10.0.0.1,3260] 00:23:37.920 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:23:37.920 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:23:37.920 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:23:37.920 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:23:37.920 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:23:37.920 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:23:37.920 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:23:37.920 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:23:37.920 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:23:37.920 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:23:37.920 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:23:37.920 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:23:37.920 Login to [iface: default, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] successful. 00:23:37.920 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:23:37.920 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:23:37.920 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:23:37.920 Login to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:23:37.920 Login to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 
00:23:37.920 Login to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:23:37.920 Login to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:23:37.920 Login to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:23:37.920 Login to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:23:37.920 Login to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:23:37.920 Login to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:23:37.920 Login to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:23:37.920 Login to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:23:37.920 Login to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 00:23:37.920 Login to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 
00:23:37.920 05:12:52 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@75 -- # waitforiscsidevices 16 00:23:37.920 05:12:52 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@116 -- # local num=16 00:23:37.920 05:12:52 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:23:37.920 05:12:52 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:23:37.920 05:12:52 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:23:37.920 05:12:52 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:23:37.920 [2024-07-24 05:12:52.394925] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:37.920 Running FIO 00:23:37.920 05:12:52 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # n=16 00:23:37.920 05:12:52 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@120 -- # '[' 16 -ne 16 ']' 00:23:37.920 05:12:52 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@123 -- # return 0 00:23:37.920 05:12:52 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@77 -- # trap 'iscsicleanup; killprocess $iscsi_pid; killprocess $record_pid; delete_tmp_files; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:23:37.920 05:12:52 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@79 -- # echo 'Running FIO' 00:23:37.920 05:12:52 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 32 -t randrw -r 1 00:23:37.920 [global] 00:23:37.920 thread=1 00:23:37.920 invalidate=1 00:23:37.920 rw=randrw 00:23:37.920 time_based=1 00:23:37.920 runtime=1 00:23:37.920 ioengine=libaio 00:23:37.920 direct=1 00:23:37.920 bs=131072 00:23:37.920 iodepth=32 00:23:37.920 norandommap=1 00:23:37.920 numjobs=1 00:23:37.920 00:23:37.920 [job0] 00:23:37.920 filename=/dev/sda 00:23:37.920 [job1] 
00:23:37.920 filename=/dev/sdb 00:23:37.920 [job2] 00:23:37.920 filename=/dev/sdc 00:23:37.920 [job3] 00:23:37.920 filename=/dev/sdd 00:23:37.920 [job4] 00:23:37.920 filename=/dev/sde 00:23:37.920 [job5] 00:23:37.920 filename=/dev/sdf 00:23:37.920 [job6] 00:23:37.920 filename=/dev/sdg 00:23:37.920 [job7] 00:23:37.920 filename=/dev/sdh 00:23:37.920 [job8] 00:23:37.920 filename=/dev/sdi 00:23:37.920 [job9] 00:23:37.920 filename=/dev/sdj 00:23:37.920 [job10] 00:23:37.920 filename=/dev/sdk 00:23:37.920 [job11] 00:23:37.920 filename=/dev/sdl 00:23:37.920 [job12] 00:23:37.920 filename=/dev/sdm 00:23:37.920 [job13] 00:23:37.920 filename=/dev/sdn 00:23:37.920 [job14] 00:23:37.920 filename=/dev/sdo 00:23:37.920 [job15] 00:23:37.920 filename=/dev/sdp 00:23:38.178 queue_depth set to 113 (sda) 00:23:38.178 queue_depth set to 113 (sdb) 00:23:38.178 queue_depth set to 113 (sdc) 00:23:38.178 queue_depth set to 113 (sdd) 00:23:38.178 queue_depth set to 113 (sde) 00:23:38.437 queue_depth set to 113 (sdf) 00:23:38.437 queue_depth set to 113 (sdg) 00:23:38.437 queue_depth set to 113 (sdh) 00:23:38.437 queue_depth set to 113 (sdi) 00:23:38.437 queue_depth set to 113 (sdj) 00:23:38.437 queue_depth set to 113 (sdk) 00:23:38.437 queue_depth set to 113 (sdl) 00:23:38.437 queue_depth set to 113 (sdm) 00:23:38.437 queue_depth set to 113 (sdn) 00:23:38.437 queue_depth set to 113 (sdo) 00:23:38.437 queue_depth set to 113 (sdp) 00:23:38.695 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:23:38.695 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:23:38.695 job2: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:23:38.695 job3: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:23:38.695 job4: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 
128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:23:38.695 job5: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:23:38.695 job6: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:23:38.695 job7: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:23:38.695 job8: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:23:38.695 job9: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:23:38.696 job10: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:23:38.696 job11: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:23:38.696 job12: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:23:38.696 job13: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:23:38.696 job14: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:23:38.696 job15: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:23:38.696 fio-3.35 00:23:38.696 Starting 16 threads 00:23:38.696 [2024-07-24 05:12:53.159359] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:38.696 [2024-07-24 05:12:53.163946] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:38.696 [2024-07-24 05:12:53.168558] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:38.696 [2024-07-24 05:12:53.171531] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:38.696 
[2024-07-24 05:12:53.174686] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:38.696 [2024-07-24 05:12:53.177216] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:38.696 [2024-07-24 05:12:53.179520] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:38.696 [2024-07-24 05:12:53.182352] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:38.696 [2024-07-24 05:12:53.184954] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:38.696 [2024-07-24 05:12:53.187273] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:38.696 [2024-07-24 05:12:53.189499] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:38.696 [2024-07-24 05:12:53.192298] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:38.696 [2024-07-24 05:12:53.194796] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:38.696 [2024-07-24 05:12:53.197234] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:38.696 [2024-07-24 05:12:53.199482] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:38.696 [2024-07-24 05:12:53.202250] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:40.071 [2024-07-24 05:12:54.604402] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:40.071 [2024-07-24 05:12:54.608465] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:40.071 [2024-07-24 05:12:54.610953] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:40.071 [2024-07-24 05:12:54.613661] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:40.071 [2024-07-24 05:12:54.615675] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:40.071 [2024-07-24 05:12:54.618065] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:40.071 [2024-07-24 05:12:54.620147] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:40.071 [2024-07-24 05:12:54.622822] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:40.071 [2024-07-24 05:12:54.625531] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:40.071 [2024-07-24 05:12:54.627750] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:40.071 [2024-07-24 05:12:54.629975] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:40.071 [2024-07-24 05:12:54.634884] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:40.071 [2024-07-24 05:12:54.637232] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:40.071 [2024-07-24 05:12:54.639617] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:40.071 [2024-07-24 05:12:54.642527] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:40.071 [2024-07-24 05:12:54.644896] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:23:40.071 00:23:40.071 job0: (groupid=0, jobs=1): err= 0: pid=82422: Wed Jul 24 05:12:54 2024 00:23:40.071 read: IOPS=303, BW=38.0MiB/s (39.8MB/s)(40.2MiB/1060msec) 00:23:40.071 slat (usec): min=6, max=625, avg=21.47, stdev=47.62 00:23:40.071 clat (usec): min=2846, max=69584, avg=13105.15, stdev=6286.61 00:23:40.071 lat (usec): min=2863, max=69609, avg=13126.62, stdev=6285.84 00:23:40.071 clat percentiles (usec): 00:23:40.071 | 1.00th=[ 5407], 5.00th=[10814], 10.00th=[11076], 20.00th=[11469], 00:23:40.071 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12387], 
00:23:40.071 | 70.00th=[12780], 80.00th=[13304], 90.00th=[14353], 95.00th=[17171], 00:23:40.071 | 99.00th=[61080], 99.50th=[66847], 99.90th=[69731], 99.95th=[69731], 00:23:40.071 | 99.99th=[69731] 00:23:40.071 bw ( KiB/s): min=36864, max=44544, per=4.31%, avg=40704.00, stdev=5430.58, samples=2 00:23:40.071 iops : min= 288, max= 348, avg=318.00, stdev=42.43, samples=2 00:23:40.071 write: IOPS=331, BW=41.4MiB/s (43.4MB/s)(43.9MiB/1060msec); 0 zone resets 00:23:40.071 slat (usec): min=12, max=570, avg=37.89, stdev=49.71 00:23:40.071 clat (msec): min=19, max=135, avg=84.31, stdev=12.76 00:23:40.071 lat (msec): min=19, max=136, avg=84.35, stdev=12.76 00:23:40.071 clat percentiles (msec): 00:23:40.071 | 1.00th=[ 28], 5.00th=[ 69], 10.00th=[ 77], 20.00th=[ 81], 00:23:40.071 | 30.00th=[ 83], 40.00th=[ 84], 50.00th=[ 86], 60.00th=[ 87], 00:23:40.071 | 70.00th=[ 88], 80.00th=[ 90], 90.00th=[ 91], 95.00th=[ 94], 00:23:40.071 | 99.00th=[ 130], 99.50th=[ 133], 99.90th=[ 136], 99.95th=[ 136], 00:23:40.071 | 99.99th=[ 136] 00:23:40.071 bw ( KiB/s): min=39936, max=43008, per=4.31%, avg=41472.00, stdev=2172.23, samples=2 00:23:40.071 iops : min= 312, max= 336, avg=324.00, stdev=16.97, samples=2 00:23:40.071 lat (msec) : 4=0.45%, 10=0.89%, 20=45.47%, 50=1.78%, 100=49.18% 00:23:40.071 lat (msec) : 250=2.23% 00:23:40.071 cpu : usr=0.94%, sys=0.85%, ctx=644, majf=0, minf=1 00:23:40.071 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=95.4%, >=64=0.0% 00:23:40.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.071 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.2%, 64=0.0%, >=64=0.0% 00:23:40.071 issued rwts: total=322,351,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.071 latency : target=0, window=0, percentile=100.00%, depth=32 00:23:40.071 job1: (groupid=0, jobs=1): err= 0: pid=82423: Wed Jul 24 05:12:54 2024 00:23:40.071 read: IOPS=480, BW=60.1MiB/s (63.0MB/s)(62.5MiB/1040msec) 00:23:40.071 slat (usec): min=9, max=294, avg=17.91, stdev=23.59 
00:23:40.071 clat (usec): min=1292, max=47128, avg=8193.39, stdev=4300.50 00:23:40.071 lat (usec): min=1302, max=47140, avg=8211.31, stdev=4299.60 00:23:40.071 clat percentiles (usec): 00:23:40.071 | 1.00th=[ 1811], 5.00th=[ 6718], 10.00th=[ 6980], 20.00th=[ 7177], 00:23:40.071 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 7832], 00:23:40.071 | 70.00th=[ 7963], 80.00th=[ 8225], 90.00th=[ 8586], 95.00th=[ 9372], 00:23:40.071 | 99.00th=[42730], 99.50th=[46924], 99.90th=[46924], 99.95th=[46924], 00:23:40.071 | 99.99th=[46924] 00:23:40.071 bw ( KiB/s): min=62976, max=63488, per=6.69%, avg=63232.00, stdev=362.04, samples=2 00:23:40.071 iops : min= 492, max= 496, avg=494.00, stdev= 2.83, samples=2 00:23:40.071 write: IOPS=525, BW=65.7MiB/s (68.9MB/s)(68.4MiB/1040msec); 0 zone resets 00:23:40.071 slat (usec): min=11, max=1007, avg=26.55, stdev=54.13 00:23:40.071 clat (usec): min=7641, max=90255, avg=53189.64, stdev=7105.81 00:23:40.071 lat (usec): min=7655, max=90279, avg=53216.19, stdev=7105.59 00:23:40.071 clat percentiles (usec): 00:23:40.071 | 1.00th=[20317], 5.00th=[46400], 10.00th=[48497], 20.00th=[50070], 00:23:40.071 | 30.00th=[51643], 40.00th=[52691], 50.00th=[53740], 60.00th=[54264], 00:23:40.071 | 70.00th=[55313], 80.00th=[56361], 90.00th=[58459], 95.00th=[59507], 00:23:40.071 | 99.00th=[77071], 99.50th=[81265], 99.90th=[90702], 99.95th=[90702], 00:23:40.071 | 99.99th=[90702] 00:23:40.071 bw ( KiB/s): min=64768, max=68864, per=6.94%, avg=66816.00, stdev=2896.31, samples=2 00:23:40.071 iops : min= 506, max= 538, avg=522.00, stdev=22.63, samples=2 00:23:40.071 lat (msec) : 2=0.48%, 10=45.94%, 20=1.24%, 50=9.93%, 100=42.41% 00:23:40.071 cpu : usr=0.77%, sys=1.54%, ctx=970, majf=0, minf=1 00:23:40.071 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=97.0%, >=64=0.0% 00:23:40.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.071 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 
00:23:40.071 issued rwts: total=500,547,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.071 latency : target=0, window=0, percentile=100.00%, depth=32 00:23:40.071 job2: (groupid=0, jobs=1): err= 0: pid=82435: Wed Jul 24 05:12:54 2024 00:23:40.071 read: IOPS=523, BW=65.4MiB/s (68.6MB/s)(68.5MiB/1047msec) 00:23:40.071 slat (usec): min=9, max=665, avg=20.56, stdev=37.44 00:23:40.071 clat (usec): min=1704, max=52816, avg=8234.78, stdev=3245.48 00:23:40.071 lat (usec): min=1714, max=52831, avg=8255.34, stdev=3252.66 00:23:40.071 clat percentiles (usec): 00:23:40.072 | 1.00th=[ 6128], 5.00th=[ 6980], 10.00th=[ 7242], 20.00th=[ 7570], 00:23:40.072 | 30.00th=[ 7701], 40.00th=[ 7832], 50.00th=[ 7963], 60.00th=[ 8094], 00:23:40.072 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8848], 95.00th=[ 8979], 00:23:40.072 | 99.00th=[13435], 99.50th=[46924], 99.90th=[52691], 99.95th=[52691], 00:23:40.072 | 99.99th=[52691] 00:23:40.072 bw ( KiB/s): min=66816, max=72704, per=7.38%, avg=69760.00, stdev=4163.44, samples=2 00:23:40.072 iops : min= 522, max= 568, avg=545.00, stdev=32.53, samples=2 00:23:40.072 write: IOPS=509, BW=63.6MiB/s (66.7MB/s)(66.6MiB/1047msec); 0 zone resets 00:23:40.072 slat (usec): min=12, max=754, avg=28.20, stdev=50.84 00:23:40.072 clat (usec): min=6493, max=90737, avg=54186.77, stdev=8071.62 00:23:40.072 lat (usec): min=6512, max=90753, avg=54214.97, stdev=8071.99 00:23:40.072 clat percentiles (usec): 00:23:40.072 | 1.00th=[18482], 5.00th=[46400], 10.00th=[48497], 20.00th=[51119], 00:23:40.072 | 30.00th=[52167], 40.00th=[53740], 50.00th=[54789], 60.00th=[55837], 00:23:40.072 | 70.00th=[56886], 80.00th=[57934], 90.00th=[58983], 95.00th=[60556], 00:23:40.072 | 99.00th=[84411], 99.50th=[88605], 99.90th=[90702], 99.95th=[90702], 00:23:40.072 | 99.99th=[90702] 00:23:40.072 bw ( KiB/s): min=64000, max=65280, per=6.71%, avg=64640.00, stdev=905.10, samples=2 00:23:40.072 iops : min= 500, max= 510, avg=505.00, stdev= 7.07, samples=2 00:23:40.072 lat (msec) : 2=0.19%, 
10=49.40%, 20=1.39%, 50=6.29%, 100=42.74% 00:23:40.072 cpu : usr=0.76%, sys=1.63%, ctx=1046, majf=0, minf=1 00:23:40.072 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=97.1%, >=64=0.0% 00:23:40.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.072 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:23:40.072 issued rwts: total=548,533,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.072 latency : target=0, window=0, percentile=100.00%, depth=32 00:23:40.072 job3: (groupid=0, jobs=1): err= 0: pid=82442: Wed Jul 24 05:12:54 2024 00:23:40.072 read: IOPS=509, BW=63.7MiB/s (66.8MB/s)(67.6MiB/1062msec) 00:23:40.072 slat (usec): min=9, max=436, avg=24.46, stdev=46.62 00:23:40.072 clat (usec): min=843, max=67385, avg=7901.65, stdev=4506.71 00:23:40.072 lat (usec): min=864, max=67398, avg=7926.11, stdev=4505.94 00:23:40.072 clat percentiles (usec): 00:23:40.072 | 1.00th=[ 1614], 5.00th=[ 6325], 10.00th=[ 6718], 20.00th=[ 6980], 00:23:40.072 | 30.00th=[ 7177], 40.00th=[ 7373], 50.00th=[ 7570], 60.00th=[ 7701], 00:23:40.072 | 70.00th=[ 7898], 80.00th=[ 8160], 90.00th=[ 8717], 95.00th=[ 9372], 00:23:40.072 | 99.00th=[14222], 99.50th=[61080], 99.90th=[67634], 99.95th=[67634], 00:23:40.072 | 99.99th=[67634] 00:23:40.072 bw ( KiB/s): min=68608, max=69120, per=7.29%, avg=68864.00, stdev=362.04, samples=2 00:23:40.072 iops : min= 536, max= 540, avg=538.00, stdev= 2.83, samples=2 00:23:40.072 write: IOPS=534, BW=66.9MiB/s (70.1MB/s)(71.0MiB/1062msec); 0 zone resets 00:23:40.072 slat (usec): min=10, max=703, avg=32.40, stdev=52.90 00:23:40.072 clat (msec): min=2, max=116, avg=52.06, stdev=11.81 00:23:40.072 lat (msec): min=2, max=116, avg=52.09, stdev=11.81 00:23:40.072 clat percentiles (msec): 00:23:40.072 | 1.00th=[ 7], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 50], 00:23:40.072 | 30.00th=[ 51], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:23:40.072 | 70.00th=[ 54], 80.00th=[ 55], 90.00th=[ 57], 95.00th=[ 63], 
00:23:40.072 | 99.00th=[ 104], 99.50th=[ 113], 99.90th=[ 117], 99.95th=[ 117], 00:23:40.072 | 99.99th=[ 117] 00:23:40.072 bw ( KiB/s): min=69120, max=69120, per=7.18%, avg=69120.00, stdev= 0.00, samples=2 00:23:40.072 iops : min= 540, max= 540, avg=540.00, stdev= 0.00, samples=2 00:23:40.072 lat (usec) : 1000=0.27% 00:23:40.072 lat (msec) : 2=0.81%, 4=0.36%, 10=45.90%, 20=2.61%, 50=14.43% 00:23:40.072 lat (msec) : 100=34.90%, 250=0.72% 00:23:40.072 cpu : usr=0.75%, sys=1.89%, ctx=1070, majf=0, minf=1 00:23:40.072 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=97.2%, >=64=0.0% 00:23:40.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.072 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:23:40.072 issued rwts: total=541,568,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.072 latency : target=0, window=0, percentile=100.00%, depth=32 00:23:40.072 job4: (groupid=0, jobs=1): err= 0: pid=82446: Wed Jul 24 05:12:54 2024 00:23:40.072 read: IOPS=322, BW=40.3MiB/s (42.3MB/s)(42.9MiB/1063msec) 00:23:40.072 slat (usec): min=9, max=377, avg=21.27, stdev=31.39 00:23:40.072 clat (usec): min=5634, max=62690, avg=12728.24, stdev=4315.34 00:23:40.072 lat (usec): min=5645, max=62705, avg=12749.51, stdev=4314.69 00:23:40.072 clat percentiles (usec): 00:23:40.072 | 1.00th=[ 5669], 5.00th=[10552], 10.00th=[11207], 20.00th=[11600], 00:23:40.072 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12256], 60.00th=[12518], 00:23:40.072 | 70.00th=[12780], 80.00th=[13566], 90.00th=[14877], 95.00th=[15795], 00:23:40.072 | 99.00th=[19792], 99.50th=[62653], 99.90th=[62653], 99.95th=[62653], 00:23:40.072 | 99.99th=[62653] 00:23:40.072 bw ( KiB/s): min=38912, max=48384, per=4.62%, avg=43648.00, stdev=6697.72, samples=2 00:23:40.072 iops : min= 304, max= 378, avg=341.00, stdev=52.33, samples=2 00:23:40.072 write: IOPS=334, BW=41.9MiB/s (43.9MB/s)(44.5MiB/1063msec); 0 zone resets 00:23:40.072 slat (usec): min=10, max=305, avg=39.94, 
stdev=41.57 00:23:40.072 clat (msec): min=6, max=137, avg=82.97, stdev=14.86 00:23:40.072 lat (msec): min=6, max=137, avg=83.01, stdev=14.86 00:23:40.072 clat percentiles (msec): 00:23:40.072 | 1.00th=[ 10], 5.00th=[ 67], 10.00th=[ 74], 20.00th=[ 80], 00:23:40.072 | 30.00th=[ 82], 40.00th=[ 83], 50.00th=[ 84], 60.00th=[ 86], 00:23:40.072 | 70.00th=[ 87], 80.00th=[ 89], 90.00th=[ 92], 95.00th=[ 99], 00:23:40.072 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 138], 99.95th=[ 138], 00:23:40.072 | 99.99th=[ 138] 00:23:40.072 bw ( KiB/s): min=40704, max=42752, per=4.33%, avg=41728.00, stdev=1448.15, samples=2 00:23:40.072 iops : min= 318, max= 334, avg=326.00, stdev=11.31, samples=2 00:23:40.072 lat (msec) : 10=2.58%, 20=46.78%, 50=1.14%, 100=47.50%, 250=2.00% 00:23:40.072 cpu : usr=0.38%, sys=1.41%, ctx=665, majf=0, minf=1 00:23:40.072 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=95.6%, >=64=0.0% 00:23:40.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.072 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:23:40.072 issued rwts: total=343,356,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.072 latency : target=0, window=0, percentile=100.00%, depth=32 00:23:40.072 job5: (groupid=0, jobs=1): err= 0: pid=82447: Wed Jul 24 05:12:54 2024 00:23:40.072 read: IOPS=535, BW=66.9MiB/s (70.1MB/s)(69.5MiB/1039msec) 00:23:40.072 slat (usec): min=9, max=1863, avg=24.39, stdev=85.24 00:23:40.072 clat (usec): min=1916, max=41855, avg=7899.76, stdev=2639.13 00:23:40.072 lat (usec): min=1926, max=41868, avg=7924.15, stdev=2638.40 00:23:40.072 clat percentiles (usec): 00:23:40.072 | 1.00th=[ 5735], 5.00th=[ 6849], 10.00th=[ 6980], 20.00th=[ 7177], 00:23:40.072 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 7832], 00:23:40.072 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8455], 95.00th=[ 8979], 00:23:40.072 | 99.00th=[12256], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:23:40.072 | 99.99th=[41681] 
00:23:40.072 bw ( KiB/s): min=64256, max=77312, per=7.49%, avg=70784.00, stdev=9231.99, samples=2 00:23:40.072 iops : min= 502, max= 604, avg=553.00, stdev=72.12, samples=2 00:23:40.072 write: IOPS=528, BW=66.0MiB/s (69.3MB/s)(68.6MiB/1039msec); 0 zone resets 00:23:40.072 slat (usec): min=11, max=549, avg=26.53, stdev=43.66 00:23:40.072 clat (usec): min=10699, max=90899, avg=52417.57, stdev=7020.04 00:23:40.072 lat (usec): min=10730, max=90915, avg=52444.10, stdev=7021.54 00:23:40.072 clat percentiles (usec): 00:23:40.072 | 1.00th=[22414], 5.00th=[44827], 10.00th=[47973], 20.00th=[49546], 00:23:40.072 | 30.00th=[50594], 40.00th=[51643], 50.00th=[52167], 60.00th=[53216], 00:23:40.072 | 70.00th=[54789], 80.00th=[55837], 90.00th=[57410], 95.00th=[58983], 00:23:40.072 | 99.00th=[78119], 99.50th=[87557], 99.90th=[90702], 99.95th=[90702], 00:23:40.072 | 99.99th=[90702] 00:23:40.072 bw ( KiB/s): min=64512, max=68864, per=6.92%, avg=66688.00, stdev=3077.33, samples=2 00:23:40.072 iops : min= 504, max= 538, avg=521.00, stdev=24.04, samples=2 00:23:40.072 lat (msec) : 2=0.09%, 4=0.09%, 10=48.60%, 20=1.63%, 50=12.22% 00:23:40.072 lat (msec) : 100=37.38% 00:23:40.072 cpu : usr=0.87%, sys=1.64%, ctx=1051, majf=0, minf=1 00:23:40.072 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=97.2%, >=64=0.0% 00:23:40.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.072 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:23:40.073 issued rwts: total=556,549,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.073 latency : target=0, window=0, percentile=100.00%, depth=32 00:23:40.073 job6: (groupid=0, jobs=1): err= 0: pid=82468: Wed Jul 24 05:12:54 2024 00:23:40.073 read: IOPS=502, BW=62.8MiB/s (65.9MB/s)(65.2MiB/1039msec) 00:23:40.073 slat (usec): min=9, max=282, avg=17.31, stdev=19.43 00:23:40.073 clat (usec): min=1787, max=45295, avg=8147.62, stdev=2734.38 00:23:40.073 lat (usec): min=1801, max=45313, avg=8164.93, stdev=2733.90 
00:23:40.073 clat percentiles (usec): 00:23:40.073 | 1.00th=[ 4228], 5.00th=[ 7046], 10.00th=[ 7308], 20.00th=[ 7504], 00:23:40.073 | 30.00th=[ 7701], 40.00th=[ 7832], 50.00th=[ 7963], 60.00th=[ 8094], 00:23:40.073 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8848], 00:23:40.073 | 99.00th=[10683], 99.50th=[40109], 99.90th=[45351], 99.95th=[45351], 00:23:40.073 | 99.99th=[45351] 00:23:40.073 bw ( KiB/s): min=66048, max=66949, per=7.04%, avg=66498.50, stdev=637.10, samples=2 00:23:40.073 iops : min= 516, max= 523, avg=519.50, stdev= 4.95, samples=2 00:23:40.073 write: IOPS=512, BW=64.0MiB/s (67.1MB/s)(66.5MiB/1039msec); 0 zone resets 00:23:40.073 slat (usec): min=10, max=350, avg=24.84, stdev=28.93 00:23:40.073 clat (usec): min=8762, max=90754, avg=54329.24, stdev=7592.67 00:23:40.073 lat (usec): min=8782, max=90790, avg=54354.08, stdev=7595.54 00:23:40.073 clat percentiles (usec): 00:23:40.073 | 1.00th=[21627], 5.00th=[45876], 10.00th=[48497], 20.00th=[51643], 00:23:40.073 | 30.00th=[52691], 40.00th=[53740], 50.00th=[54789], 60.00th=[55837], 00:23:40.073 | 70.00th=[56886], 80.00th=[57934], 90.00th=[59507], 95.00th=[61604], 00:23:40.073 | 99.00th=[82314], 99.50th=[86508], 99.90th=[90702], 99.95th=[90702], 00:23:40.073 | 99.99th=[90702] 00:23:40.073 bw ( KiB/s): min=63615, max=65280, per=6.69%, avg=64447.50, stdev=1177.33, samples=2 00:23:40.073 iops : min= 496, max= 510, avg=503.00, stdev= 9.90, samples=2 00:23:40.073 lat (msec) : 2=0.28%, 10=48.58%, 20=0.85%, 50=6.45%, 100=43.83% 00:23:40.073 cpu : usr=0.48%, sys=1.73%, ctx=978, majf=0, minf=1 00:23:40.073 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=97.1%, >=64=0.0% 00:23:40.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.073 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:23:40.073 issued rwts: total=522,532,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.073 latency : target=0, window=0, percentile=100.00%, depth=32 
00:23:40.073 job7: (groupid=0, jobs=1): err= 0: pid=82529: Wed Jul 24 05:12:54 2024 00:23:40.073 read: IOPS=476, BW=59.6MiB/s (62.5MB/s)(62.5MiB/1049msec) 00:23:40.073 slat (usec): min=9, max=555, avg=23.33, stdev=43.93 00:23:40.073 clat (usec): min=1829, max=52078, avg=7646.62, stdev=2893.36 00:23:40.073 lat (usec): min=1839, max=52112, avg=7669.95, stdev=2892.77 00:23:40.073 clat percentiles (usec): 00:23:40.073 | 1.00th=[ 2606], 5.00th=[ 6325], 10.00th=[ 6652], 20.00th=[ 6915], 00:23:40.073 | 30.00th=[ 7177], 40.00th=[ 7373], 50.00th=[ 7504], 60.00th=[ 7635], 00:23:40.073 | 70.00th=[ 7767], 80.00th=[ 7898], 90.00th=[ 8291], 95.00th=[ 8717], 00:23:40.073 | 99.00th=[12125], 99.50th=[13304], 99.90th=[52167], 99.95th=[52167], 00:23:40.073 | 99.99th=[52167] 00:23:40.073 bw ( KiB/s): min=58880, max=68608, per=6.75%, avg=63744.00, stdev=6878.73, samples=2 00:23:40.073 iops : min= 460, max= 536, avg=498.00, stdev=53.74, samples=2 00:23:40.073 write: IOPS=536, BW=67.1MiB/s (70.3MB/s)(70.4MiB/1049msec); 0 zone resets 00:23:40.073 slat (usec): min=10, max=648, avg=31.37, stdev=44.74 00:23:40.073 clat (msec): min=9, max=102, avg=52.63, stdev= 8.52 00:23:40.073 lat (msec): min=9, max=102, avg=52.66, stdev= 8.52 00:23:40.073 clat percentiles (msec): 00:23:40.073 | 1.00th=[ 21], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 50], 00:23:40.073 | 30.00th=[ 51], 40.00th=[ 52], 50.00th=[ 53], 60.00th=[ 54], 00:23:40.073 | 70.00th=[ 55], 80.00th=[ 55], 90.00th=[ 57], 95.00th=[ 61], 00:23:40.073 | 99.00th=[ 93], 99.50th=[ 102], 99.90th=[ 103], 99.95th=[ 103], 00:23:40.073 | 99.99th=[ 103] 00:23:40.073 bw ( KiB/s): min=67328, max=69120, per=7.08%, avg=68224.00, stdev=1267.14, samples=2 00:23:40.073 iops : min= 526, max= 540, avg=533.00, stdev= 9.90, samples=2 00:23:40.073 lat (msec) : 2=0.19%, 4=0.28%, 10=45.81%, 20=1.03%, 50=12.51% 00:23:40.073 lat (msec) : 100=39.89%, 250=0.28% 00:23:40.073 cpu : usr=0.95%, sys=1.62%, ctx=968, majf=0, minf=1 00:23:40.073 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 
8=0.8%, 16=1.5%, 32=97.1%, >=64=0.0% 00:23:40.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.073 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:23:40.073 issued rwts: total=500,563,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.073 latency : target=0, window=0, percentile=100.00%, depth=32 00:23:40.073 job8: (groupid=0, jobs=1): err= 0: pid=82530: Wed Jul 24 05:12:54 2024 00:23:40.073 read: IOPS=293, BW=36.7MiB/s (38.5MB/s)(38.6MiB/1052msec) 00:23:40.073 slat (usec): min=8, max=373, avg=19.80, stdev=22.18 00:23:40.073 clat (usec): min=4573, max=61885, avg=12930.77, stdev=4887.34 00:23:40.073 lat (usec): min=4584, max=61904, avg=12950.57, stdev=4887.62 00:23:40.073 clat percentiles (usec): 00:23:40.073 | 1.00th=[ 7046], 5.00th=[10945], 10.00th=[11207], 20.00th=[11600], 00:23:40.073 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12125], 60.00th=[12387], 00:23:40.073 | 70.00th=[12649], 80.00th=[13304], 90.00th=[14746], 95.00th=[16319], 00:23:40.073 | 99.00th=[22676], 99.50th=[56886], 99.90th=[62129], 99.95th=[62129], 00:23:40.073 | 99.99th=[62129] 00:23:40.073 bw ( KiB/s): min=38989, max=39424, per=4.15%, avg=39206.50, stdev=307.59, samples=2 00:23:40.073 iops : min= 304, max= 308, avg=306.00, stdev= 2.83, samples=2 00:23:40.073 write: IOPS=333, BW=41.7MiB/s (43.7MB/s)(43.9MiB/1052msec); 0 zone resets 00:23:40.073 slat (usec): min=11, max=760, avg=40.11, stdev=57.46 00:23:40.073 clat (msec): min=18, max=131, avg=84.25, stdev=11.93 00:23:40.073 lat (msec): min=18, max=131, avg=84.29, stdev=11.93 00:23:40.073 clat percentiles (msec): 00:23:40.073 | 1.00th=[ 30], 5.00th=[ 68], 10.00th=[ 77], 20.00th=[ 80], 00:23:40.073 | 30.00th=[ 82], 40.00th=[ 84], 50.00th=[ 86], 60.00th=[ 87], 00:23:40.073 | 70.00th=[ 89], 80.00th=[ 91], 90.00th=[ 93], 95.00th=[ 96], 00:23:40.073 | 99.00th=[ 120], 99.50th=[ 126], 99.90th=[ 132], 99.95th=[ 132], 00:23:40.073 | 99.99th=[ 132] 00:23:40.073 bw ( KiB/s): min=39759, max=42752, 
per=4.28%, avg=41255.50, stdev=2116.37, samples=2 00:23:40.073 iops : min= 310, max= 334, avg=322.00, stdev=16.97, samples=2 00:23:40.073 lat (msec) : 10=0.91%, 20=45.45%, 50=1.21%, 100=50.76%, 250=1.67% 00:23:40.073 cpu : usr=0.76%, sys=1.24%, ctx=613, majf=0, minf=1 00:23:40.073 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=95.3%, >=64=0.0% 00:23:40.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.073 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.2%, 64=0.0%, >=64=0.0% 00:23:40.073 issued rwts: total=309,351,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.073 latency : target=0, window=0, percentile=100.00%, depth=32 00:23:40.073 job9: (groupid=0, jobs=1): err= 0: pid=82531: Wed Jul 24 05:12:54 2024 00:23:40.073 read: IOPS=537, BW=67.2MiB/s (70.5MB/s)(69.8MiB/1038msec) 00:23:40.073 slat (usec): min=6, max=509, avg=21.80, stdev=38.69 00:23:40.073 clat (usec): min=3478, max=45924, avg=8021.72, stdev=3171.83 00:23:40.073 lat (usec): min=3494, max=45939, avg=8043.52, stdev=3169.25 00:23:40.073 clat percentiles (usec): 00:23:40.073 | 1.00th=[ 4359], 5.00th=[ 6652], 10.00th=[ 6915], 20.00th=[ 7177], 00:23:40.073 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 7898], 00:23:40.073 | 70.00th=[ 8029], 80.00th=[ 8291], 90.00th=[ 8848], 95.00th=[ 9765], 00:23:40.073 | 99.00th=[11994], 99.50th=[44303], 99.90th=[45876], 99.95th=[45876], 00:23:40.073 | 99.99th=[45876] 00:23:40.073 bw ( KiB/s): min=65792, max=76184, per=7.51%, avg=70988.00, stdev=7348.25, samples=2 00:23:40.073 iops : min= 514, max= 595, avg=554.50, stdev=57.28, samples=2 00:23:40.073 write: IOPS=527, BW=66.0MiB/s (69.2MB/s)(68.5MiB/1038msec); 0 zone resets 00:23:40.073 slat (usec): min=8, max=665, avg=30.77, stdev=55.03 00:23:40.073 clat (usec): min=9873, max=90916, avg=52246.41, stdev=7705.35 00:23:40.073 lat (usec): min=9901, max=90931, avg=52277.17, stdev=7701.70 00:23:40.073 clat percentiles (usec): 00:23:40.073 | 1.00th=[20317], 5.00th=[43254], 
10.00th=[46924], 20.00th=[49546], 00:23:40.073 | 30.00th=[50594], 40.00th=[51119], 50.00th=[52691], 60.00th=[53216], 00:23:40.073 | 70.00th=[54789], 80.00th=[55837], 90.00th=[57410], 95.00th=[59507], 00:23:40.073 | 99.00th=[81265], 99.50th=[85459], 99.90th=[90702], 99.95th=[90702], 00:23:40.073 | 99.99th=[90702] 00:23:40.073 bw ( KiB/s): min=64641, max=68864, per=6.93%, avg=66752.50, stdev=2986.11, samples=2 00:23:40.073 iops : min= 505, max= 538, avg=521.50, stdev=23.33, samples=2 00:23:40.073 lat (msec) : 4=0.36%, 10=48.64%, 20=1.54%, 50=12.84%, 100=36.62% 00:23:40.073 cpu : usr=0.58%, sys=1.54%, ctx=1156, majf=0, minf=1 00:23:40.073 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=97.2%, >=64=0.0% 00:23:40.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.073 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:23:40.073 issued rwts: total=558,548,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.073 latency : target=0, window=0, percentile=100.00%, depth=32 00:23:40.073 job10: (groupid=0, jobs=1): err= 0: pid=82532: Wed Jul 24 05:12:54 2024 00:23:40.073 read: IOPS=528, BW=66.0MiB/s (69.2MB/s)(69.5MiB/1053msec) 00:23:40.073 slat (usec): min=5, max=1555, avg=20.59, stdev=70.82 00:23:40.073 clat (usec): min=909, max=58546, avg=8052.69, stdev=3130.00 00:23:40.073 lat (usec): min=916, max=58567, avg=8073.28, stdev=3129.03 00:23:40.073 clat percentiles (usec): 00:23:40.073 | 1.00th=[ 1778], 5.00th=[ 6980], 10.00th=[ 7242], 20.00th=[ 7504], 00:23:40.073 | 30.00th=[ 7635], 40.00th=[ 7832], 50.00th=[ 7963], 60.00th=[ 8094], 00:23:40.073 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8586], 95.00th=[ 8979], 00:23:40.074 | 99.00th=[11863], 99.50th=[12387], 99.90th=[58459], 99.95th=[58459], 00:23:40.074 | 99.99th=[58459] 00:23:40.074 bw ( KiB/s): min=65536, max=76288, per=7.51%, avg=70912.00, stdev=7602.81, samples=2 00:23:40.074 iops : min= 512, max= 596, avg=554.00, stdev=59.40, samples=2 00:23:40.074 write: 
IOPS=509, BW=63.7MiB/s (66.8MB/s)(67.1MiB/1053msec); 0 zone resets 00:23:40.074 slat (usec): min=7, max=637, avg=25.70, stdev=41.44 00:23:40.074 clat (msec): min=2, max=102, avg=54.18, stdev=10.38 00:23:40.074 lat (msec): min=2, max=102, avg=54.21, stdev=10.38 00:23:40.074 clat percentiles (msec): 00:23:40.074 | 1.00th=[ 10], 5.00th=[ 46], 10.00th=[ 49], 20.00th=[ 52], 00:23:40.074 | 30.00th=[ 53], 40.00th=[ 54], 50.00th=[ 55], 60.00th=[ 56], 00:23:40.074 | 70.00th=[ 57], 80.00th=[ 58], 90.00th=[ 61], 95.00th=[ 64], 00:23:40.074 | 99.00th=[ 99], 99.50th=[ 101], 99.90th=[ 103], 99.95th=[ 103], 00:23:40.074 | 99.99th=[ 103] 00:23:40.074 bw ( KiB/s): min=64768, max=65280, per=6.75%, avg=65024.00, stdev=362.04, samples=2 00:23:40.074 iops : min= 506, max= 510, avg=508.00, stdev= 2.83, samples=2 00:23:40.074 lat (usec) : 1000=0.27% 00:23:40.074 lat (msec) : 2=0.27%, 4=0.82%, 10=49.22%, 20=1.01%, 50=6.50% 00:23:40.074 lat (msec) : 100=41.54%, 250=0.37% 00:23:40.074 cpu : usr=0.95%, sys=1.33%, ctx=945, majf=0, minf=1 00:23:40.074 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=97.2%, >=64=0.0% 00:23:40.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.074 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:23:40.074 issued rwts: total=556,537,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.074 latency : target=0, window=0, percentile=100.00%, depth=32 00:23:40.074 job11: (groupid=0, jobs=1): err= 0: pid=82533: Wed Jul 24 05:12:54 2024 00:23:40.074 read: IOPS=545, BW=68.2MiB/s (71.5MB/s)(71.5MiB/1048msec) 00:23:40.074 slat (usec): min=8, max=544, avg=20.24, stdev=41.38 00:23:40.074 clat (usec): min=1996, max=52627, avg=8071.81, stdev=4436.11 00:23:40.074 lat (usec): min=2006, max=52638, avg=8092.05, stdev=4434.10 00:23:40.074 clat percentiles (usec): 00:23:40.074 | 1.00th=[ 3818], 5.00th=[ 6521], 10.00th=[ 6783], 20.00th=[ 7046], 00:23:40.074 | 30.00th=[ 7308], 40.00th=[ 7439], 50.00th=[ 7635], 60.00th=[ 7701], 
00:23:40.074 | 70.00th=[ 7963], 80.00th=[ 8225], 90.00th=[ 8717], 95.00th=[ 9634], 00:23:40.074 | 99.00th=[47973], 99.50th=[49546], 99.90th=[52691], 99.95th=[52691], 00:23:40.074 | 99.99th=[52691] 00:23:40.074 bw ( KiB/s): min=71936, max=73106, per=7.68%, avg=72521.00, stdev=827.31, samples=2 00:23:40.074 iops : min= 562, max= 571, avg=566.50, stdev= 6.36, samples=2 00:23:40.074 write: IOPS=534, BW=66.8MiB/s (70.0MB/s)(70.0MiB/1048msec); 0 zone resets 00:23:40.074 slat (usec): min=9, max=1013, avg=33.26, stdev=72.77 00:23:40.074 clat (usec): min=8579, max=93924, avg=51424.42, stdev=7507.65 00:23:40.074 lat (usec): min=8611, max=93942, avg=51457.68, stdev=7504.29 00:23:40.074 clat percentiles (usec): 00:23:40.074 | 1.00th=[20055], 5.00th=[42730], 10.00th=[45351], 20.00th=[47973], 00:23:40.074 | 30.00th=[49546], 40.00th=[50594], 50.00th=[51643], 60.00th=[52691], 00:23:40.074 | 70.00th=[53740], 80.00th=[54789], 90.00th=[56886], 95.00th=[58459], 00:23:40.074 | 99.00th=[79168], 99.50th=[91751], 99.90th=[93848], 99.95th=[93848], 00:23:40.074 | 99.99th=[93848] 00:23:40.074 bw ( KiB/s): min=67206, max=69632, per=7.10%, avg=68419.00, stdev=1715.44, samples=2 00:23:40.074 iops : min= 525, max= 544, avg=534.50, stdev=13.44, samples=2 00:23:40.074 lat (msec) : 2=0.09%, 4=0.44%, 10=48.67%, 20=1.24%, 50=15.64% 00:23:40.074 lat (msec) : 100=33.92% 00:23:40.074 cpu : usr=0.48%, sys=1.91%, ctx=1001, majf=0, minf=1 00:23:40.074 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=97.3%, >=64=0.0% 00:23:40.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.074 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:23:40.074 issued rwts: total=572,560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.074 latency : target=0, window=0, percentile=100.00%, depth=32 00:23:40.074 job12: (groupid=0, jobs=1): err= 0: pid=82534: Wed Jul 24 05:12:54 2024 00:23:40.074 read: IOPS=376, BW=47.0MiB/s (49.3MB/s)(49.8MiB/1058msec) 00:23:40.074 slat 
(usec): min=6, max=1613, avg=23.71, stdev=87.07 00:23:40.074 clat (usec): min=5417, max=66692, avg=13017.52, stdev=5349.97 00:23:40.074 lat (usec): min=5427, max=66703, avg=13041.23, stdev=5346.50 00:23:40.074 clat percentiles (usec): 00:23:40.074 | 1.00th=[ 7898], 5.00th=[11076], 10.00th=[11338], 20.00th=[11469], 00:23:40.074 | 30.00th=[11731], 40.00th=[11863], 50.00th=[12125], 60.00th=[12256], 00:23:40.074 | 70.00th=[12649], 80.00th=[13304], 90.00th=[14484], 95.00th=[16712], 00:23:40.074 | 99.00th=[58459], 99.50th=[64226], 99.90th=[66847], 99.95th=[66847], 00:23:40.074 | 99.99th=[66847] 00:23:40.074 bw ( KiB/s): min=49152, max=51712, per=5.34%, avg=50432.00, stdev=1810.19, samples=2 00:23:40.074 iops : min= 384, max= 404, avg=394.00, stdev=14.14, samples=2 00:23:40.074 write: IOPS=331, BW=41.5MiB/s (43.5MB/s)(43.9MiB/1058msec); 0 zone resets 00:23:40.074 slat (usec): min=8, max=422, avg=34.02, stdev=40.91 00:23:40.074 clat (msec): min=19, max=129, avg=81.40, stdev=11.51 00:23:40.074 lat (msec): min=19, max=129, avg=81.44, stdev=11.51 00:23:40.074 clat percentiles (msec): 00:23:40.074 | 1.00th=[ 31], 5.00th=[ 72], 10.00th=[ 75], 20.00th=[ 79], 00:23:40.074 | 30.00th=[ 80], 40.00th=[ 81], 50.00th=[ 82], 60.00th=[ 83], 00:23:40.074 | 70.00th=[ 85], 80.00th=[ 86], 90.00th=[ 88], 95.00th=[ 93], 00:23:40.074 | 99.00th=[ 124], 99.50th=[ 127], 99.90th=[ 130], 99.95th=[ 130], 00:23:40.074 | 99.99th=[ 130] 00:23:40.074 bw ( KiB/s): min=39936, max=42752, per=4.29%, avg=41344.00, stdev=1991.21, samples=2 00:23:40.074 iops : min= 312, max= 334, avg=323.00, stdev=15.56, samples=2 00:23:40.074 lat (msec) : 10=0.93%, 20=51.40%, 50=1.34%, 100=44.59%, 250=1.74% 00:23:40.074 cpu : usr=0.76%, sys=1.04%, ctx=704, majf=0, minf=1 00:23:40.074 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=95.9%, >=64=0.0% 00:23:40.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.074 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 
00:23:40.074 issued rwts: total=398,351,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.074 latency : target=0, window=0, percentile=100.00%, depth=32 00:23:40.074 job13: (groupid=0, jobs=1): err= 0: pid=82535: Wed Jul 24 05:12:54 2024 00:23:40.074 read: IOPS=533, BW=66.7MiB/s (69.9MB/s)(69.8MiB/1046msec) 00:23:40.074 slat (usec): min=6, max=836, avg=18.84, stdev=40.24 00:23:40.074 clat (usec): min=506, max=50714, avg=7726.50, stdev=3296.91 00:23:40.074 lat (usec): min=541, max=50733, avg=7745.34, stdev=3296.28 00:23:40.074 clat percentiles (usec): 00:23:40.074 | 1.00th=[ 1844], 5.00th=[ 5604], 10.00th=[ 6783], 20.00th=[ 7111], 00:23:40.074 | 30.00th=[ 7308], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7832], 00:23:40.074 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8586], 95.00th=[ 9110], 00:23:40.074 | 99.00th=[12780], 99.50th=[46924], 99.90th=[50594], 99.95th=[50594], 00:23:40.074 | 99.99th=[50594] 00:23:40.074 bw ( KiB/s): min=66560, max=75671, per=7.53%, avg=71115.50, stdev=6442.45, samples=2 00:23:40.074 iops : min= 520, max= 591, avg=555.50, stdev=50.20, samples=2 00:23:40.074 write: IOPS=529, BW=66.2MiB/s (69.4MB/s)(69.2MiB/1046msec); 0 zone resets 00:23:40.074 slat (usec): min=6, max=1165, avg=29.02, stdev=71.64 00:23:40.074 clat (usec): min=1724, max=91685, avg=52436.38, stdev=9186.46 00:23:40.074 lat (usec): min=1765, max=91707, avg=52465.40, stdev=9188.39 00:23:40.074 clat percentiles (usec): 00:23:40.074 | 1.00th=[ 7308], 5.00th=[44827], 10.00th=[47973], 20.00th=[50070], 00:23:40.074 | 30.00th=[51119], 40.00th=[51643], 50.00th=[52691], 60.00th=[53740], 00:23:40.074 | 70.00th=[54789], 80.00th=[55837], 90.00th=[57934], 95.00th=[62653], 00:23:40.074 | 99.00th=[79168], 99.50th=[86508], 99.90th=[91751], 99.95th=[91751], 00:23:40.074 | 99.99th=[91751] 00:23:40.074 bw ( KiB/s): min=65923, max=68352, per=6.97%, avg=67137.50, stdev=1717.56, samples=2 00:23:40.074 iops : min= 515, max= 534, avg=524.50, stdev=13.44, samples=2 00:23:40.074 lat (usec) : 750=0.27% 
00:23:40.074 lat (msec) : 2=0.36%, 4=1.35%, 10=48.02%, 20=0.90%, 50=9.53% 00:23:40.074 lat (msec) : 100=39.57% 00:23:40.074 cpu : usr=0.77%, sys=1.63%, ctx=1021, majf=0, minf=1 00:23:40.074 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=97.2%, >=64=0.0% 00:23:40.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.074 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:23:40.074 issued rwts: total=558,554,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.074 latency : target=0, window=0, percentile=100.00%, depth=32 00:23:40.074 job14: (groupid=0, jobs=1): err= 0: pid=82536: Wed Jul 24 05:12:54 2024 00:23:40.074 read: IOPS=497, BW=62.1MiB/s (65.2MB/s)(64.8MiB/1042msec) 00:23:40.074 slat (usec): min=6, max=203, avg=16.23, stdev=13.98 00:23:40.074 clat (usec): min=1649, max=48187, avg=8266.06, stdev=3510.13 00:23:40.074 lat (usec): min=1672, max=48198, avg=8282.29, stdev=3510.22 00:23:40.074 clat percentiles (usec): 00:23:40.074 | 1.00th=[ 5800], 5.00th=[ 7046], 10.00th=[ 7177], 20.00th=[ 7504], 00:23:40.074 | 30.00th=[ 7701], 40.00th=[ 7832], 50.00th=[ 7963], 60.00th=[ 8094], 00:23:40.074 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8979], 00:23:40.074 | 99.00th=[13173], 99.50th=[46400], 99.90th=[47973], 99.95th=[47973], 00:23:40.074 | 99.99th=[47973] 00:23:40.074 bw ( KiB/s): min=62720, max=68864, per=6.96%, avg=65792.00, stdev=4344.46, samples=2 00:23:40.074 iops : min= 490, max= 538, avg=514.00, stdev=33.94, samples=2 00:23:40.074 write: IOPS=510, BW=63.8MiB/s (66.9MB/s)(66.5MiB/1042msec); 0 zone resets 00:23:40.074 slat (usec): min=6, max=607, avg=26.14, stdev=42.41 00:23:40.074 clat (usec): min=9138, max=91309, avg=54488.52, stdev=7727.16 00:23:40.074 lat (usec): min=9154, max=91324, avg=54514.67, stdev=7729.04 00:23:40.074 clat percentiles (usec): 00:23:40.074 | 1.00th=[21103], 5.00th=[47449], 10.00th=[50070], 20.00th=[51643], 00:23:40.074 | 30.00th=[52691], 40.00th=[53740], 
50.00th=[54789], 60.00th=[55313], 00:23:40.074 | 70.00th=[56361], 80.00th=[57410], 90.00th=[59507], 95.00th=[61604], 00:23:40.075 | 99.00th=[83362], 99.50th=[89654], 99.90th=[91751], 99.95th=[91751], 00:23:40.075 | 99.99th=[91751] 00:23:40.075 bw ( KiB/s): min=64000, max=65280, per=6.71%, avg=64640.00, stdev=905.10, samples=2 00:23:40.075 iops : min= 500, max= 510, avg=505.00, stdev= 7.07, samples=2 00:23:40.075 lat (msec) : 2=0.10%, 10=48.29%, 20=1.05%, 50=5.43%, 100=45.14% 00:23:40.075 cpu : usr=0.48%, sys=1.73%, ctx=973, majf=0, minf=1 00:23:40.075 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=97.0%, >=64=0.0% 00:23:40.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.075 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:23:40.075 issued rwts: total=518,532,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.075 latency : target=0, window=0, percentile=100.00%, depth=32 00:23:40.075 job15: (groupid=0, jobs=1): err= 0: pid=82537: Wed Jul 24 05:12:54 2024 00:23:40.075 read: IOPS=524, BW=65.6MiB/s (68.8MB/s)(68.1MiB/1039msec) 00:23:40.075 slat (usec): min=6, max=322, avg=18.99, stdev=25.61 00:23:40.075 clat (usec): min=2330, max=46528, avg=7800.54, stdev=3259.76 00:23:40.075 lat (usec): min=2355, max=46546, avg=7819.53, stdev=3258.83 00:23:40.075 clat percentiles (usec): 00:23:40.075 | 1.00th=[ 4015], 5.00th=[ 6325], 10.00th=[ 6652], 20.00th=[ 6980], 00:23:40.075 | 30.00th=[ 7177], 40.00th=[ 7373], 50.00th=[ 7504], 60.00th=[ 7635], 00:23:40.075 | 70.00th=[ 7898], 80.00th=[ 8029], 90.00th=[ 8356], 95.00th=[ 9110], 00:23:40.075 | 99.00th=[12518], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:23:40.075 | 99.99th=[46400] 00:23:40.075 bw ( KiB/s): min=64000, max=74347, per=7.32%, avg=69173.50, stdev=7316.43, samples=2 00:23:40.075 iops : min= 500, max= 580, avg=540.00, stdev=56.57, samples=2 00:23:40.075 write: IOPS=546, BW=68.3MiB/s (71.7MB/s)(71.0MiB/1039msec); 0 zone resets 00:23:40.075 slat 
(usec): min=8, max=7837, avg=41.24, stdev=332.93 00:23:40.075 clat (usec): min=7462, max=93502, avg=50763.14, stdev=9958.30 00:23:40.075 lat (usec): min=7778, max=93513, avg=50804.38, stdev=9903.22 00:23:40.075 clat percentiles (usec): 00:23:40.075 | 1.00th=[ 8356], 5.00th=[39584], 10.00th=[46400], 20.00th=[48497], 00:23:40.075 | 30.00th=[50070], 40.00th=[51119], 50.00th=[51643], 60.00th=[52691], 00:23:40.075 | 70.00th=[53740], 80.00th=[54789], 90.00th=[56361], 95.00th=[57934], 00:23:40.075 | 99.00th=[85459], 99.50th=[93848], 99.90th=[93848], 99.95th=[93848], 00:23:40.075 | 99.99th=[93848] 00:23:40.075 bw ( KiB/s): min=69120, max=69237, per=7.18%, avg=69178.50, stdev=82.73, samples=2 00:23:40.075 iops : min= 540, max= 540, avg=540.00, stdev= 0.00, samples=2 00:23:40.075 lat (msec) : 4=0.36%, 10=48.34%, 20=1.53%, 50=13.84%, 100=35.94% 00:23:40.075 cpu : usr=0.77%, sys=1.64%, ctx=974, majf=0, minf=1 00:23:40.075 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=97.2%, >=64=0.0% 00:23:40.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.075 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:23:40.075 issued rwts: total=545,568,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.075 latency : target=0, window=0, percentile=100.00%, depth=32 00:23:40.075 00:23:40.075 Run status group 0 (all jobs): 00:23:40.075 READ: bw=923MiB/s (967MB/s), 36.7MiB/s-68.2MiB/s (38.5MB/s-71.5MB/s), io=981MiB (1028MB), run=1038-1063msec 00:23:40.075 WRITE: bw=941MiB/s (986MB/s), 41.4MiB/s-68.3MiB/s (43.4MB/s-71.7MB/s), io=1000MiB (1049MB), run=1038-1063msec 00:23:40.075 00:23:40.075 Disk stats (read/write): 00:23:40.075 sda: ios=321/270, merge=0/0, ticks=3394/22249, in_queue=25644, util=70.84% 00:23:40.075 sdb: ios=450/435, merge=0/0, ticks=3130/22909, in_queue=26039, util=71.15% 00:23:40.075 sdc: ios=510/426, merge=0/0, ticks=3645/22505, in_queue=26150, util=72.57% 00:23:40.075 sdd: ios=507/470, merge=0/0, ticks=3370/23306, 
in_queue=26677, util=73.78% 00:23:40.075 sde: ios=322/275, merge=0/0, ticks=3633/22129, in_queue=25762, util=74.14% 00:23:40.075 sdf: ios=500/434, merge=0/0, ticks=3634/22394, in_queue=26028, util=72.86% 00:23:40.075 sdg: ios=440/423, merge=0/0, ticks=3471/22623, in_queue=26095, util=73.18% 00:23:40.075 sdh: ios=424/452, merge=0/0, ticks=3113/23035, in_queue=26148, util=77.38% 00:23:40.075 sdi: ios=266/269, merge=0/0, ticks=3303/22282, in_queue=25585, util=77.92% 00:23:40.075 sdj: ios=490/435, merge=0/0, ticks=3770/22187, in_queue=25957, util=79.77% 00:23:40.075 sdk: ios=492/433, merge=0/0, ticks=3826/22688, in_queue=26514, util=81.72% 00:23:40.075 sdl: ios=464/453, merge=0/0, ticks=3469/22644, in_queue=26114, util=82.42% 00:23:40.075 sdm: ios=339/269, merge=0/0, ticks=4232/21399, in_queue=25632, util=82.53% 00:23:40.075 sdn: ios=479/445, merge=0/0, ticks=3537/22825, in_queue=26363, util=85.14% 00:23:40.075 sdo: ios=426/423, merge=0/0, ticks=3362/22670, in_queue=26032, util=84.02% 00:23:40.075 sdp: ios=464/471, merge=0/0, ticks=3387/23221, in_queue=26609, util=89.18% 00:23:40.075 05:12:54 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@82 -- # iscsicleanup 00:23:40.075 Cleaning up iSCSI connection 00:23:40.075 05:12:54 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:23:40.075 05:12:54 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:23:40.643 Logging out of session [sid: 22, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] 00:23:40.643 Logging out of session [sid: 23, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:23:40.643 Logging out of session [sid: 24, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:23:40.643 Logging out of session [sid: 25, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:23:40.643 Logging out of session [sid: 26, target: iqn.2016-06.io.spdk:Target4, portal: 
10.0.0.1,3260] 00:23:40.643 Logging out of session [sid: 27, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:23:40.643 Logging out of session [sid: 28, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:23:40.643 Logging out of session [sid: 29, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:23:40.643 Logging out of session [sid: 30, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:23:40.643 Logging out of session [sid: 31, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:23:40.643 Logging out of session [sid: 32, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:23:40.643 Logging out of session [sid: 33, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:23:40.643 Logging out of session [sid: 34, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:23:40.643 Logging out of session [sid: 35, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:23:40.643 Logging out of session [sid: 36, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:23:40.643 Logging out of session [sid: 37, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:23:40.643 Logout of [sid: 22, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] successful. 00:23:40.643 Logout of [sid: 23, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:23:40.643 Logout of [sid: 24, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:23:40.643 Logout of [sid: 25, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:23:40.643 Logout of [sid: 26, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:23:40.643 Logout of [sid: 27, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:23:40.643 Logout of [sid: 28, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 
00:23:40.643 Logout of [sid: 29, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:23:40.643 Logout of [sid: 30, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:23:40.643 Logout of [sid: 31, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:23:40.643 Logout of [sid: 32, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:23:40.643 Logout of [sid: 33, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:23:40.643 Logout of [sid: 34, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:23:40.643 Logout of [sid: 35, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:23:40.643 Logout of [sid: 36, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 00:23:40.643 Logout of [sid: 37, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 00:23:40.643 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:23:40.643 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@983 -- # rm -rf 00:23:40.643 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@84 -- # RPCS= 00:23:40.643 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # seq 0 15 00:23:40.643 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:40.643 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target0\n' 00:23:40.643 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc0\n' 00:23:40.643 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:40.643 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target1\n' 00:23:40.643 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc1\n' 00:23:40.643 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:40.643 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target2\n' 00:23:40.643 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc2\n' 00:23:40.643 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:40.643 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target3\n' 00:23:40.643 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc3\n' 00:23:40.643 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:40.643 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target4\n' 00:23:40.643 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc4\n' 00:23:40.643 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:40.643 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target5\n' 00:23:40.643 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc5\n' 00:23:40.643 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:40.643 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target6\n' 00:23:40.643 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc6\n' 00:23:40.643 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:40.644 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target7\n' 00:23:40.644 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc7\n' 00:23:40.644 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:40.644 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target8\n' 00:23:40.644 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc8\n' 00:23:40.644 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:40.644 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target9\n' 00:23:40.644 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc9\n' 00:23:40.644 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:40.644 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target10\n' 00:23:40.644 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc10\n' 00:23:40.644 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:40.644 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target11\n' 00:23:40.644 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc11\n' 00:23:40.644 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:40.644 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target12\n' 00:23:40.644 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc12\n' 00:23:40.644 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:40.644 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target13\n' 00:23:40.644 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc13\n' 00:23:40.644 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:40.644 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target14\n' 00:23:40.644 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc14\n' 00:23:40.644 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:23:40.644 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # 
RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target15\n' 00:23:40.644 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc15\n' 00:23:40.644 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@90 -- # echo -e iscsi_delete_target_node 'iqn.2016-06.io.spdk:Target0\nbdev_malloc_delete' 'Malloc0\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target1\nbdev_malloc_delete' 'Malloc1\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target2\nbdev_malloc_delete' 'Malloc2\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target3\nbdev_malloc_delete' 'Malloc3\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target4\nbdev_malloc_delete' 'Malloc4\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target5\nbdev_malloc_delete' 'Malloc5\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target6\nbdev_malloc_delete' 'Malloc6\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target7\nbdev_malloc_delete' 'Malloc7\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target8\nbdev_malloc_delete' 'Malloc8\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target9\nbdev_malloc_delete' 'Malloc9\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target10\nbdev_malloc_delete' 'Malloc10\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target11\nbdev_malloc_delete' 'Malloc11\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target12\nbdev_malloc_delete' 'Malloc12\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target13\nbdev_malloc_delete' 'Malloc13\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target14\nbdev_malloc_delete' 'Malloc14\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target15\nbdev_malloc_delete' 'Malloc15\n' 00:23:40.644 05:12:55 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:44.834 05:12:58 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@92 -- # trap 'delete_tmp_files; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 
00:23:44.834 05:12:58 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@94 -- # killprocess 82001 00:23:44.834 05:12:58 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@948 -- # '[' -z 82001 ']' 00:23:44.834 05:12:58 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@952 -- # kill -0 82001 00:23:44.834 05:12:58 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@953 -- # uname 00:23:44.834 05:12:58 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:44.834 05:12:58 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82001 00:23:44.834 05:12:58 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:44.834 05:12:58 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:44.834 killing process with pid 82001 00:23:44.834 05:12:58 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82001' 00:23:44.834 05:12:58 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@967 -- # kill 82001 00:23:44.834 05:12:58 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@972 -- # wait 82001 00:23:46.734 05:13:01 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@95 -- # killprocess 82040 00:23:46.734 05:13:01 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@948 -- # '[' -z 82040 ']' 00:23:46.734 05:13:01 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@952 -- # kill -0 82040 00:23:46.734 05:13:01 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@953 -- # uname 00:23:46.734 05:13:01 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:46.734 05:13:01 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82040 00:23:46.734 05:13:01 
iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # process_name=spdk_trace_reco 00:23:46.734 05:13:01 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@958 -- # '[' spdk_trace_reco = sudo ']' 00:23:46.734 killing process with pid 82040 00:23:46.734 05:13:01 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82040' 00:23:46.734 05:13:01 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@967 -- # kill 82040 00:23:46.734 05:13:01 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@972 -- # wait 82040 00:23:46.734 05:13:01 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@96 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -f ./tmp-trace/record.trace 00:23:58.940 05:13:12 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@100 -- # grep 'trace entries for lcore' ./tmp-trace/record.notice 00:23:58.940 05:13:12 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@100 -- # cut -d ' ' -f 2 00:23:58.940 05:13:12 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@100 -- # record_num='149820 00:23:58.940 101330 00:23:58.940 151993 00:23:58.940 154042' 00:23:58.940 05:13:12 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # cut -d ' ' -f 6 00:23:58.940 05:13:12 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # grep 'Trace Size of lcore' ./tmp-trace/trace.log 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # trace_tool_num='149820 00:23:58.940 101330 00:23:58.940 151993 00:23:58.940 154042' 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@105 -- # delete_tmp_files 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@19 -- # rm -rf ./tmp-trace 00:23:58.940 entries numbers from trace record are: 149820 101330 151993 154042 00:23:58.940 05:13:13 
iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@107 -- # echo 'entries numbers from trace record are:' 149820 101330 151993 154042 00:23:58.940 entries numbers from trace tool are: 149820 101330 151993 154042 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@108 -- # echo 'entries numbers from trace tool are:' 149820 101330 151993 154042 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@110 -- # arr_record_num=($record_num) 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@111 -- # arr_trace_tool_num=($trace_tool_num) 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@112 -- # len_arr_record_num=4 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@113 -- # len_arr_trace_tool_num=4 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@116 -- # '[' 4 -ne 4 ']' 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # seq 0 3 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 149820 -le 4096 ']' 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 149820 -ne 149820 ']' 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 101330 -le 4096 ']' 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 101330 -ne 101330 ']' 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 
$((len_arr_record_num - 1))) 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 151993 -le 4096 ']' 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 151993 -ne 151993 ']' 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 154042 -le 4096 ']' 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 154042 -ne 154042 ']' 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@135 -- # trap - SIGINT SIGTERM EXIT 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@136 -- # iscsitestfini 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:23:58.940 00:23:58.940 real 0m25.467s 00:23:58.940 user 1m10.765s 00:23:58.940 sys 0m4.260s 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:23:58.940 ************************************ 00:23:58.940 END TEST iscsi_tgt_trace_record 00:23:58.940 ************************************ 00:23:58.940 05:13:13 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@41 -- # run_test iscsi_tgt_login_redirection /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection/login_redirection.sh 00:23:58.940 05:13:13 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:58.940 05:13:13 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:58.940 05:13:13 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:23:58.940 ************************************ 00:23:58.940 START TEST iscsi_tgt_login_redirection 00:23:58.940 
************************************ 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection/login_redirection.sh 00:23:58.940 * Looking for test storage... 00:23:58.940 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:23:58.940 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:23:58.941 05:13:13 
iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@12 -- # iscsitestinit 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@14 -- # NULL_BDEV_SIZE=64 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@15 -- # NULL_BLOCK_SIZE=512 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@17 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@18 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@20 -- # rpc_addr1=/var/tmp/spdk0.sock 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@21 -- # rpc_addr2=/var/tmp/spdk1.sock 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@25 -- # timing_enter 
start_iscsi_tgts 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@28 -- # pid1=82923 00:23:58.941 Process pid: 82923 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@29 -- # echo 'Process pid: 82923' 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@32 -- # pid2=82924 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@27 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk0.sock -i 0 -m 0x1 --wait-for-rpc 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@33 -- # echo 'Process pid: 82924' 00:23:58.941 Process pid: 82924 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@35 -- # trap 'killprocess $pid1; killprocess $pid2; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@37 -- # waitforlisten 82923 /var/tmp/spdk0.sock 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@829 -- # '[' -z 82923 ']' 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk0.sock 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@31 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk1.sock -i 1 -m 0x2 --wait-for-rpc 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:23:58.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock... 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock...' 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:58.941 05:13:13 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:23:58.941 [2024-07-24 05:13:13.388953] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:23:58.941 [2024-07-24 05:13:13.389127] [ DPDK EAL parameters: iscsi -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:58.941 [2024-07-24 05:13:13.404145] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:23:58.941 [2024-07-24 05:13:13.404332] [ DPDK EAL parameters: iscsi -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.200 [2024-07-24 05:13:13.578738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.200 [2024-07-24 05:13:13.597784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.200 [2024-07-24 05:13:13.821717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.457 [2024-07-24 05:13:13.898531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.714 05:13:14 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:59.714 05:13:14 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@862 -- # return 0 00:23:59.714 05:13:14 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_set_options -w 0 -o 30 -a 16 00:23:59.997 05:13:14 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock framework_start_init 00:24:00.265 [2024-07-24 05:13:14.825217] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:01.203 iscsi_tgt_1 is listening. 00:24:01.203 05:13:15 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@40 -- # echo 'iscsi_tgt_1 is listening.' 
00:24:01.203 05:13:15 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@42 -- # waitforlisten 82924 /var/tmp/spdk1.sock 00:24:01.203 05:13:15 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@829 -- # '[' -z 82924 ']' 00:24:01.203 05:13:15 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk1.sock 00:24:01.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock... 00:24:01.203 05:13:15 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:01.203 05:13:15 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock...' 00:24:01.203 05:13:15 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:01.203 05:13:15 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:24:01.203 05:13:15 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:01.203 05:13:15 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@862 -- # return 0 00:24:01.203 05:13:15 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_set_options -w 0 -o 30 -a 16 00:24:01.462 05:13:15 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock framework_start_init 00:24:01.720 [2024-07-24 05:13:16.346908] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:02.656 iscsi_tgt_2 is listening. 00:24:02.656 05:13:17 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@45 -- # echo 'iscsi_tgt_2 is listening.' 
00:24:02.656 05:13:17 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@47 -- # timing_exit start_iscsi_tgts 00:24:02.656 05:13:17 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:02.656 05:13:17 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:24:02.656 05:13:17 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:24:02.914 05:13:17 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_portal_group 1 10.0.0.1:3260 00:24:03.172 05:13:17 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock bdev_null_create Null0 64 512 00:24:03.172 Null0 00:24:03.172 05:13:17 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_target_node Target1 Target1_alias Null0:0 1:2 64 -d 00:24:03.431 05:13:17 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:24:03.689 05:13:18 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_portal_group 1 10.0.0.3:3260 -p 00:24:03.947 05:13:18 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock bdev_null_create Null0 64 512 00:24:04.205 Null0 00:24:04.205 05:13:18 iscsi_tgt.iscsi_tgt_login_redirection 
-- login_redirection/login_redirection.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_target_node Target1 Target1_alias Null0:0 1:2 64 -d 00:24:04.205 05:13:18 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@67 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:24:04.205 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:24:04.205 05:13:18 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@68 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:24:04.205 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:24:04.205 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:24:04.205 05:13:18 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@69 -- # waitforiscsidevices 1 00:24:04.205 05:13:18 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@116 -- # local num=1 00:24:04.205 05:13:18 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:24:04.205 05:13:18 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:24:04.205 05:13:18 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:24:04.205 05:13:18 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:24:04.206 [2024-07-24 05:13:18.837590] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:04.464 05:13:18 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # n=1 00:24:04.464 05:13:18 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:24:04.464 05:13:18 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@123 -- # return 0 00:24:04.464 05:13:18 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@72 -- # 
fiopid=83033 00:24:04.464 FIO pid: 83033 00:24:04.464 05:13:18 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@73 -- # echo 'FIO pid: 83033' 00:24:04.465 05:13:18 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@75 -- # trap 'iscsicleanup; killprocess $pid1; killprocess $pid2; killprocess $fiopid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:24:04.465 05:13:18 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t randrw -r 15 00:24:04.465 05:13:18 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections 00:24:04.465 05:13:18 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # jq length 00:24:04.465 [global] 00:24:04.465 thread=1 00:24:04.465 invalidate=1 00:24:04.465 rw=randrw 00:24:04.465 time_based=1 00:24:04.465 runtime=15 00:24:04.465 ioengine=libaio 00:24:04.465 direct=1 00:24:04.465 bs=512 00:24:04.465 iodepth=1 00:24:04.465 norandommap=1 00:24:04.465 numjobs=1 00:24:04.465 00:24:04.465 [job0] 00:24:04.465 filename=/dev/sda 00:24:04.465 queue_depth set to 113 (sda) 00:24:04.465 job0: (g=0): rw=randrw, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:24:04.465 fio-3.35 00:24:04.465 Starting 1 thread 00:24:04.465 [2024-07-24 05:13:19.015041] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:04.465 05:13:19 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # '[' 1 = 1 ']' 00:24:04.465 05:13:19 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_get_connections 00:24:04.465 05:13:19 iscsi_tgt.iscsi_tgt_login_redirection -- 
login_redirection/login_redirection.sh@78 -- # jq length 00:24:04.724 05:13:19 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@78 -- # '[' 0 = 0 ']' 00:24:04.724 05:13:19 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_set_redirect iqn.2016-06.io.spdk:Target1 1 -a 10.0.0.3 -p 3260 00:24:04.982 05:13:19 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_request_logout iqn.2016-06.io.spdk:Target1 -t 1 00:24:04.982 05:13:19 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@85 -- # sleep 5 00:24:10.253 05:13:24 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections 00:24:10.253 05:13:24 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # jq length 00:24:10.253 05:13:24 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # '[' 0 = 0 ']' 00:24:10.253 05:13:24 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # jq length 00:24:10.253 05:13:24 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_get_connections 00:24:10.511 05:13:24 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # '[' 1 = 1 ']' 00:24:10.511 05:13:24 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_set_redirect iqn.2016-06.io.spdk:Target1 1 00:24:10.769 05:13:25 iscsi_tgt.iscsi_tgt_login_redirection -- 
login_redirection/login_redirection.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_target_node_request_logout iqn.2016-06.io.spdk:Target1 -t 1 00:24:11.027 05:13:25 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@93 -- # sleep 5 00:24:16.299 05:13:30 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections 00:24:16.299 05:13:30 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # jq length 00:24:16.299 05:13:30 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # '[' 1 = 1 ']' 00:24:16.299 05:13:30 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_get_connections 00:24:16.299 05:13:30 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # jq length 00:24:16.299 05:13:30 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # '[' 0 = 0 ']' 00:24:16.299 05:13:30 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@98 -- # wait 83033 00:24:19.583 [2024-07-24 05:13:34.122611] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:19.583 00:24:19.583 job0: (groupid=0, jobs=1): err= 0: pid=83061: Wed Jul 24 05:13:34 2024 00:24:19.583 read: IOPS=6049, BW=3025KiB/s (3097kB/s)(44.3MiB/15001msec) 00:24:19.583 slat (usec): min=3, max=126, avg= 5.43, stdev= 1.71 00:24:19.583 clat (usec): min=2, max=2006.3k, avg=74.63, stdev=6659.64 00:24:19.583 lat (usec): min=42, max=2006.3k, avg=80.06, stdev=6659.64 00:24:19.583 clat percentiles (usec): 00:24:19.583 | 1.00th=[ 41], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 48], 00:24:19.583 | 30.00th=[ 49], 40.00th=[ 50], 50.00th=[ 50], 60.00th=[ 53], 
00:24:19.583 | 70.00th=[ 56], 80.00th=[ 57], 90.00th=[ 63], 95.00th=[ 69], 00:24:19.583 | 99.00th=[ 81], 99.50th=[ 86], 99.90th=[ 149], 99.95th=[ 178], 00:24:19.583 | 99.99th=[ 310] 00:24:19.583 bw ( KiB/s): min= 225, max= 4538, per=100.00%, avg=3746.43, stdev=1087.99, samples=23 00:24:19.583 iops : min= 450, max= 9076, avg=7492.87, stdev=2175.99, samples=23 00:24:19.583 write: IOPS=6029, BW=3015KiB/s (3087kB/s)(44.2MiB/15001msec); 0 zone resets 00:24:19.583 slat (usec): min=3, max=280, avg= 5.35, stdev= 2.17 00:24:19.583 clat (nsec): min=1480, max=2008.2M, avg=79236.41, stdev=6676937.22 00:24:19.583 lat (usec): min=46, max=2008.2k, avg=84.58, stdev=6676.94 00:24:19.583 clat percentiles (usec): 00:24:19.583 | 1.00th=[ 45], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 50], 00:24:19.583 | 30.00th=[ 51], 40.00th=[ 52], 50.00th=[ 53], 60.00th=[ 58], 00:24:19.583 | 70.00th=[ 62], 80.00th=[ 63], 90.00th=[ 71], 95.00th=[ 75], 00:24:19.583 | 99.00th=[ 89], 99.50th=[ 95], 99.90th=[ 155], 99.95th=[ 184], 00:24:19.583 | 99.99th=[ 330] 00:24:19.583 bw ( KiB/s): min= 182, max= 4528, per=100.00%, avg=3731.17, stdev=1080.13, samples=23 00:24:19.583 iops : min= 364, max= 9056, avg=7462.35, stdev=2160.27, samples=23 00:24:19.583 lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=34.88% 00:24:19.583 lat (usec) : 100=64.84%, 250=0.25%, 500=0.02%, 750=0.01%, 1000=0.01% 00:24:19.583 lat (msec) : 4=0.01%, >=2000=0.01% 00:24:19.583 cpu : usr=2.27%, sys=7.35%, ctx=181228, majf=0, minf=1 00:24:19.583 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:19.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:19.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:19.583 issued rwts: total=90752,90455,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:19.583 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:19.583 00:24:19.583 Run status group 0 (all jobs): 00:24:19.583 READ: bw=3025KiB/s (3097kB/s), 
3025KiB/s-3025KiB/s (3097kB/s-3097kB/s), io=44.3MiB (46.5MB), run=15001-15001msec 00:24:19.583 WRITE: bw=3015KiB/s (3087kB/s), 3015KiB/s-3015KiB/s (3087kB/s-3087kB/s), io=44.2MiB (46.3MB), run=15001-15001msec 00:24:19.583 00:24:19.583 Disk stats (read/write): 00:24:19.583 sda: ios=89700/89391, merge=0/0, ticks=6832/7216, in_queue=14049, util=99.46% 00:24:19.583 Cleaning up iSCSI connection 00:24:19.583 05:13:34 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@100 -- # trap - SIGINT SIGTERM EXIT 00:24:19.583 05:13:34 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@102 -- # iscsicleanup 00:24:19.583 05:13:34 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:24:19.583 05:13:34 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:24:19.841 Logging out of session [sid: 38, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:24:19.841 Logout of [sid: 38, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:24:19.841 05:13:34 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:24:19.841 05:13:34 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@983 -- # rm -rf 00:24:19.841 05:13:34 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@103 -- # killprocess 82923 00:24:19.841 05:13:34 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@948 -- # '[' -z 82923 ']' 00:24:19.841 05:13:34 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@952 -- # kill -0 82923 00:24:19.841 05:13:34 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@953 -- # uname 00:24:19.841 05:13:34 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:19.841 05:13:34 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82923 00:24:19.841 killing process with pid 82923 00:24:19.841 05:13:34 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:19.841 05:13:34 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:19.841 05:13:34 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82923' 00:24:19.841 05:13:34 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@967 -- # kill 82923 00:24:19.841 05:13:34 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@972 -- # wait 82923 00:24:22.378 05:13:36 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@104 -- # killprocess 82924 00:24:22.378 05:13:36 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@948 -- # '[' -z 82924 ']' 00:24:22.378 05:13:36 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@952 -- # kill -0 82924 00:24:22.378 05:13:36 iscsi_tgt.iscsi_tgt_login_redirection -- 
common/autotest_common.sh@953 -- # uname 00:24:22.378 05:13:36 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:22.378 05:13:36 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82924 00:24:22.378 05:13:36 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:22.378 05:13:36 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:22.378 killing process with pid 82924 00:24:22.378 05:13:36 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82924' 00:24:22.378 05:13:36 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@967 -- # kill 82924 00:24:22.378 05:13:36 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@972 -- # wait 82924 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@105 -- # iscsitestfini 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:24:25.669 00:24:25.669 real 0m26.446s 00:24:25.669 user 0m48.340s 00:24:25.669 sys 0m6.367s 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:24:25.669 ************************************ 00:24:25.669 END TEST iscsi_tgt_login_redirection 00:24:25.669 ************************************ 00:24:25.669 05:13:39 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@42 -- # run_test iscsi_tgt_digests /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests/digests.sh 00:24:25.669 05:13:39 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:25.669 05:13:39 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:25.669 05:13:39 iscsi_tgt -- 
common/autotest_common.sh@10 -- # set +x 00:24:25.669 ************************************ 00:24:25.669 START TEST iscsi_tgt_digests 00:24:25.669 ************************************ 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests/digests.sh 00:24:25.669 * Looking for test storage... 00:24:25.669 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@23 -- # 
ISCSI_PORT=3260 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@11 -- # iscsitestinit 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@49 -- # MALLOC_BDEV_SIZE=64 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@52 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@54 -- # timing_enter start_iscsi_tgt 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@57 -- # pid=83365 00:24:25.669 Process pid: 83365 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@58 -- # echo 'Process pid: 83365' 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@60 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@62 -- # waitforlisten 83365 00:24:25.669 05:13:39 
iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@56 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@829 -- # '[' -z 83365 ']' 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:25.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:25.669 05:13:39 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:24:25.669 [2024-07-24 05:13:39.895403] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:24:25.669 [2024-07-24 05:13:39.895586] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83365 ] 00:24:25.669 [2024-07-24 05:13:40.083972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:25.928 [2024-07-24 05:13:40.398781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:25.928 [2024-07-24 05:13:40.398974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:25.928 [2024-07-24 05:13:40.399116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.928 [2024-07-24 05:13:40.399135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:26.189 05:13:40 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:26.189 05:13:40 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@862 -- # return 0 00:24:26.189 05:13:40 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@63 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:24:26.189 05:13:40 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.189 05:13:40 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:24:26.189 05:13:40 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.189 05:13:40 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@64 -- # rpc_cmd framework_start_init 00:24:26.189 05:13:40 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.189 05:13:40 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:24:26.451 [2024-07-24 05:13:41.019181] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:27.386 05:13:41 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.386 
iscsi_tgt is listening. Running tests... 00:24:27.386 05:13:41 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@65 -- # echo 'iscsi_tgt is listening. Running tests...' 00:24:27.386 05:13:41 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@67 -- # timing_exit start_iscsi_tgt 00:24:27.386 05:13:41 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:27.386 05:13:41 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:24:27.386 05:13:41 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@69 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:24:27.386 05:13:41 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.386 05:13:41 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:24:27.386 05:13:41 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.386 05:13:41 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@70 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:24:27.386 05:13:41 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.386 05:13:41 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:24:27.386 05:13:41 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.386 05:13:41 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@71 -- # rpc_cmd bdev_malloc_create 64 512 00:24:27.386 05:13:41 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.386 05:13:41 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:24:27.386 Malloc0 00:24:27.386 05:13:41 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.386 05:13:41 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@76 -- # rpc_cmd iscsi_create_target_node Target3 Target3_alias Malloc0:0 1:2 64 -d 00:24:27.386 05:13:41 iscsi_tgt.iscsi_tgt_digests -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.386 05:13:41 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:24:27.386 05:13:41 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.386 05:13:41 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@77 -- # sleep 1 00:24:28.322 05:13:42 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@79 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:24:28.322 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:24:28.323 05:13:42 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.DataDigest' -v None 00:24:28.323 05:13:42 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # true 00:24:28.323 05:13:42 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # DataDigestAbility='iscsiadm: Cannot modify node.conn[0].iscsi.DataDigest. Invalid param name. 00:24:28.323 iscsiadm: Could not execute operation on all records: invalid parameter' 00:24:28.323 05:13:42 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@84 -- # '[' 'iscsiadm: Cannot modify node.conn[0].iscsi.DataDigest. Invalid param name. 
00:24:28.323 iscsiadm: Could not execute operation on all records: invalid parameterx' '!=' x ']' 00:24:28.323 05:13:42 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@85 -- # run_test iscsi_tgt_digest iscsi_header_digest_test 00:24:28.323 05:13:42 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:28.323 05:13:42 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:28.323 05:13:42 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:24:28.323 ************************************ 00:24:28.323 START TEST iscsi_tgt_digest 00:24:28.323 ************************************ 00:24:28.323 05:13:42 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@1123 -- # iscsi_header_digest_test 00:24:28.323 05:13:42 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@27 -- # node_login_fio_logout 'HeaderDigest -v CRC32C' 00:24:28.323 05:13:42 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@14 -- # for arg in "$@" 00:24:28.323 05:13:42 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@15 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.HeaderDigest' -v CRC32C 00:24:28.323 05:13:42 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@17 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:24:28.323 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:24:28.323 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
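The login sequence above applies each digest setting with `iscsiadm -o update` before logging in, matching the `node_login_fio_logout 'HeaderDigest -v CRC32C'` call in digests.sh. A hedged sketch of that helper, with the command runner parameterised so it can be dry-run (pass `echo` instead of executing `iscsiadm`); the portal address mirrors the log:

```shell
# For each 'Name -v Value' argument, update the node.conn[0].iscsi.<Name>
# parameter, then log in to the portal. $runner lets the sequence be
# dry-run with `echo` rather than invoking iscsiadm for real.
node_login() {
    local runner=$1 portal=$2; shift 2
    local arg
    for arg in "$@"; do
        # ${arg%% *} is the parameter name, ${arg#* } the '-v Value' part
        # (left unquoted on purpose so it splits into two words).
        $runner iscsiadm -m node -p "$portal" -o update \
            -n "node.conn[0].iscsi.${arg%% *}" ${arg#* }
    done
    $runner iscsiadm -m node --login -p "$portal"
}
```

The later `CRC32C,None` run in this log exercises the same helper with a negotiable digest list instead of a fixed value.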
00:24:28.323 05:13:42 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@18 -- # waitforiscsidevices 1 00:24:28.323 05:13:42 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=1 00:24:28.323 [2024-07-24 05:13:42.913167] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:28.323 05:13:42 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:24:28.323 05:13:42 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:24:28.323 05:13:42 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:24:28.323 05:13:42 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:24:28.323 05:13:42 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=1 00:24:28.323 05:13:42 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:24:28.323 05:13:42 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:24:28.323 05:13:42 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t write -r 2 00:24:28.323 [global] 00:24:28.323 thread=1 00:24:28.323 invalidate=1 00:24:28.323 rw=write 00:24:28.323 time_based=1 00:24:28.323 runtime=2 00:24:28.323 ioengine=libaio 00:24:28.323 direct=1 00:24:28.323 bs=512 00:24:28.323 iodepth=1 00:24:28.323 norandommap=1 00:24:28.323 numjobs=1 00:24:28.323 00:24:28.323 [job0] 00:24:28.323 filename=/dev/sda 00:24:28.582 queue_depth set to 113 (sda) 00:24:28.582 job0: (g=0): rw=write, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:24:28.582 fio-3.35 00:24:28.582 Starting 1 thread 00:24:28.582 [2024-07-24 05:13:43.088344] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:24:31.115 [2024-07-24 05:13:45.198928] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:31.115 00:24:31.115 job0: (groupid=0, jobs=1): err= 0: pid=83471: Wed Jul 24 05:13:45 2024 00:24:31.115 write: IOPS=9618, BW=4809KiB/s (4925kB/s)(9623KiB/2001msec); 0 zone resets 00:24:31.115 slat (nsec): min=3787, max=57250, avg=5778.16, stdev=1584.12 00:24:31.115 clat (usec): min=76, max=3658, avg=97.69, stdev=54.28 00:24:31.115 lat (usec): min=81, max=3666, avg=103.47, stdev=54.46 00:24:31.115 clat percentiles (usec): 00:24:31.115 | 1.00th=[ 83], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 90], 00:24:31.115 | 30.00th=[ 92], 40.00th=[ 93], 50.00th=[ 95], 60.00th=[ 97], 00:24:31.115 | 70.00th=[ 99], 80.00th=[ 103], 90.00th=[ 110], 95.00th=[ 115], 00:24:31.115 | 99.00th=[ 128], 99.50th=[ 135], 99.90th=[ 169], 99.95th=[ 515], 00:24:31.115 | 99.99th=[ 3523] 00:24:31.115 bw ( KiB/s): min= 4377, max= 4977, per=99.25%, avg=4773.33, stdev=343.28, samples=3 00:24:31.115 iops : min= 8754, max= 9954, avg=9546.67, stdev=686.56, samples=3 00:24:31.115 lat (usec) : 100=72.38%, 250=27.54%, 500=0.03%, 750=0.01%, 1000=0.01% 00:24:31.115 lat (msec) : 2=0.01%, 4=0.03% 00:24:31.115 cpu : usr=1.95%, sys=7.95%, ctx=19253, majf=0, minf=1 00:24:31.115 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:31.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:31.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:31.115 issued rwts: total=0,19246,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:31.115 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:31.115 00:24:31.115 Run status group 0 (all jobs): 00:24:31.115 WRITE: bw=4809KiB/s (4925kB/s), 4809KiB/s-4809KiB/s (4925kB/s-4925kB/s), io=9623KiB (9854kB), run=2001-2001msec 00:24:31.115 00:24:31.115 Disk stats (read/write): 00:24:31.115 sda: ios=48/18123, merge=0/0, ticks=9/1759, in_queue=1769, 
util=95.47% 00:24:31.115 05:13:45 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 2 00:24:31.115 [global] 00:24:31.115 thread=1 00:24:31.115 invalidate=1 00:24:31.115 rw=read 00:24:31.115 time_based=1 00:24:31.115 runtime=2 00:24:31.115 ioengine=libaio 00:24:31.115 direct=1 00:24:31.115 bs=512 00:24:31.115 iodepth=1 00:24:31.115 norandommap=1 00:24:31.115 numjobs=1 00:24:31.115 00:24:31.115 [job0] 00:24:31.115 filename=/dev/sda 00:24:31.115 queue_depth set to 113 (sda) 00:24:31.115 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:24:31.115 fio-3.35 00:24:31.115 Starting 1 thread 00:24:33.018 00:24:33.018 job0: (groupid=0, jobs=1): err= 0: pid=83524: Wed Jul 24 05:13:47 2024 00:24:33.018 read: IOPS=11.1k, BW=5560KiB/s (5693kB/s)(10.9MiB/2001msec) 00:24:33.018 slat (nsec): min=3610, max=58482, avg=5537.46, stdev=1562.24 00:24:33.018 clat (usec): min=65, max=402, avg=83.91, stdev= 8.94 00:24:33.018 lat (usec): min=71, max=424, avg=89.44, stdev= 9.29 00:24:33.018 clat percentiles (usec): 00:24:33.018 | 1.00th=[ 71], 5.00th=[ 74], 10.00th=[ 75], 20.00th=[ 78], 00:24:33.018 | 30.00th=[ 80], 40.00th=[ 82], 50.00th=[ 83], 60.00th=[ 85], 00:24:33.018 | 70.00th=[ 87], 80.00th=[ 90], 90.00th=[ 95], 95.00th=[ 99], 00:24:33.018 | 99.00th=[ 110], 99.50th=[ 115], 99.90th=[ 129], 99.95th=[ 141], 00:24:33.018 | 99.99th=[ 367] 00:24:33.018 bw ( KiB/s): min= 5426, max= 5621, per=99.90%, avg=5554.00, stdev=110.89, samples=3 00:24:33.018 iops : min=10852, max=11242, avg=11108.00, stdev=221.78, samples=3 00:24:33.018 lat (usec) : 100=95.65%, 250=4.33%, 500=0.01% 00:24:33.018 cpu : usr=1.95%, sys=9.55%, ctx=22253, majf=0, minf=1 00:24:33.018 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:33.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:33.018 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:33.018 issued rwts: total=22250,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:33.018 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:33.018 00:24:33.018 Run status group 0 (all jobs): 00:24:33.018 READ: bw=5560KiB/s (5693kB/s), 5560KiB/s-5560KiB/s (5693kB/s-5693kB/s), io=10.9MiB (11.4MB), run=2001-2001msec 00:24:33.018 00:24:33.018 Disk stats (read/write): 00:24:33.018 sda: ios=21022/0, merge=0/0, ticks=1739/0, in_queue=1739, util=95.13% 00:24:33.018 05:13:47 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@21 -- # iscsiadm -m node --logout -p 10.0.0.1:3260 00:24:33.018 Logging out of session [sid: 39, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:24:33.018 Logout of [sid: 39, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:24:33.018 05:13:47 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@22 -- # waitforiscsidevices 0 00:24:33.018 05:13:47 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=0 00:24:33.018 05:13:47 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:24:33.018 05:13:47 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:24:33.018 05:13:47 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:24:33.018 05:13:47 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:24:33.018 iscsiadm: No active sessions. 
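The `[global]`/`[job0]` sections printed before each run are the job file that `fio-wrapper -p iscsi -i 512 -d 1 -t read -r 2` generates: 512-byte blocks, queue depth 1, libaio, direct I/O, a two-second time-based run against the attached `/dev/sda`. A sketch reproducing that job file (the `/tmp` path is illustrative):

```shell
# Recreate the fio job shown in the log output above.
cat > /tmp/digest_job.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=2
ioengine=libaio
direct=1
bs=512
iodepth=1
norandommap=1
numjobs=1

[job0]
filename=/dev/sda
EOF
```

Running it directly would be `fio /tmp/digest_job.fio`; the write-phase runs differ only in `rw=write`.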
00:24:33.018 05:13:47 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # true 00:24:33.018 05:13:47 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=0 00:24:33.018 05:13:47 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:24:33.018 05:13:47 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:24:33.018 05:13:47 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@31 -- # node_login_fio_logout 'HeaderDigest -v CRC32C,None' 00:24:33.018 05:13:47 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@14 -- # for arg in "$@" 00:24:33.018 05:13:47 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@15 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.HeaderDigest' -v CRC32C,None 00:24:33.018 05:13:47 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@17 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:24:33.018 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:24:33.018 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:24:33.018 05:13:47 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@18 -- # waitforiscsidevices 1 00:24:33.018 05:13:47 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=1 00:24:33.018 05:13:47 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:24:33.018 05:13:47 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:24:33.018 05:13:47 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:24:33.018 05:13:47 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:24:33.018 [2024-07-24 05:13:47.647874] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:33.276 05:13:47 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=1 00:24:33.276 05:13:47 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:24:33.276 05:13:47 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:24:33.276 05:13:47 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t write -r 2 00:24:33.276 [global] 00:24:33.276 thread=1 00:24:33.276 invalidate=1 00:24:33.276 rw=write 00:24:33.276 time_based=1 00:24:33.276 runtime=2 00:24:33.276 ioengine=libaio 00:24:33.276 direct=1 00:24:33.276 bs=512 00:24:33.276 iodepth=1 00:24:33.276 norandommap=1 00:24:33.276 numjobs=1 00:24:33.276 00:24:33.276 [job0] 00:24:33.276 filename=/dev/sda 00:24:33.276 queue_depth set to 113 (sda) 00:24:33.276 job0: (g=0): rw=write, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:24:33.276 fio-3.35 00:24:33.276 Starting 1 thread 00:24:33.276 [2024-07-24 05:13:47.828424] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:24:35.809 [2024-07-24 05:13:49.940088] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:24:35.809 00:24:35.809 job0: (groupid=0, jobs=1): err= 0: pid=83589: Wed Jul 24 05:13:49 2024 00:24:35.809 write: IOPS=13.1k, BW=6551KiB/s (6708kB/s)(12.8MiB/2001msec); 0 zone resets 00:24:35.809 slat (nsec): min=3465, max=55091, avg=5361.93, stdev=3403.77 00:24:35.809 clat (usec): min=36, max=2024, avg=70.33, stdev=20.14 00:24:35.809 lat (usec): min=61, max=2030, avg=75.69, stdev=20.19 00:24:35.809 clat percentiles (usec): 00:24:35.809 | 1.00th=[ 52], 5.00th=[ 61], 10.00th=[ 64], 20.00th=[ 66], 00:24:35.809 | 30.00th=[ 68], 40.00th=[ 69], 50.00th=[ 70], 60.00th=[ 71], 00:24:35.809 | 70.00th=[ 73], 80.00th=[ 75], 90.00th=[ 79], 95.00th=[ 83], 00:24:35.809 | 99.00th=[ 93], 99.50th=[ 99], 99.90th=[ 114], 99.95th=[ 137], 00:24:35.809 | 99.99th=[ 1369] 00:24:35.809 bw ( KiB/s): min= 6120, max= 6696, per=99.26%, avg=6503.00, stdev=331.69, samples=3 00:24:35.809 iops : min=12240, max=13392, avg=13006.00, stdev=663.38, samples=3 00:24:35.809 lat (usec) : 50=0.50%, 100=99.05%, 250=0.42%, 500=0.01%, 750=0.01% 00:24:35.809 lat (usec) : 1000=0.01% 00:24:35.809 lat (msec) : 2=0.01%, 4=0.01% 00:24:35.809 cpu : usr=3.35%, sys=9.05%, ctx=27674, majf=0, minf=1 00:24:35.809 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:35.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:35.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:35.809 issued rwts: total=0,26218,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:35.809 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:35.809 00:24:35.809 Run status group 0 (all jobs): 00:24:35.809 WRITE: bw=6551KiB/s (6708kB/s), 6551KiB/s-6551KiB/s (6708kB/s-6708kB/s), io=12.8MiB (13.4MB), run=2001-2001msec 00:24:35.809 00:24:35.809 Disk stats (read/write): 00:24:35.809 sda: ios=48/24709, merge=0/0, 
ticks=7/1704, in_queue=1711, util=95.22% 00:24:35.809 05:13:49 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 2 00:24:35.809 [global] 00:24:35.809 thread=1 00:24:35.809 invalidate=1 00:24:35.809 rw=read 00:24:35.809 time_based=1 00:24:35.809 runtime=2 00:24:35.809 ioengine=libaio 00:24:35.809 direct=1 00:24:35.809 bs=512 00:24:35.809 iodepth=1 00:24:35.809 norandommap=1 00:24:35.809 numjobs=1 00:24:35.809 00:24:35.809 [job0] 00:24:35.809 filename=/dev/sda 00:24:35.809 queue_depth set to 113 (sda) 00:24:35.809 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:24:35.809 fio-3.35 00:24:35.809 Starting 1 thread 00:24:37.714 00:24:37.714 job0: (groupid=0, jobs=1): err= 0: pid=83642: Wed Jul 24 05:13:52 2024 00:24:37.714 read: IOPS=14.0k, BW=7024KiB/s (7193kB/s)(13.7MiB/2001msec) 00:24:37.714 slat (usec): min=3, max=153, avg= 4.96, stdev= 2.70 00:24:37.714 clat (usec): min=2, max=249, avg=65.55, stdev= 8.44 00:24:37.714 lat (usec): min=54, max=295, avg=70.51, stdev= 9.58 00:24:37.714 clat percentiles (usec): 00:24:37.714 | 1.00th=[ 54], 5.00th=[ 56], 10.00th=[ 58], 20.00th=[ 59], 00:24:37.714 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 64], 60.00th=[ 67], 00:24:37.714 | 70.00th=[ 69], 80.00th=[ 72], 90.00th=[ 76], 95.00th=[ 80], 00:24:37.714 | 99.00th=[ 93], 99.50th=[ 99], 99.90th=[ 116], 99.95th=[ 137], 00:24:37.714 | 99.99th=[ 188] 00:24:37.714 bw ( KiB/s): min= 6847, max= 7486, per=100.00%, avg=7165.00, stdev=319.51, samples=3 00:24:37.714 iops : min=13694, max=14972, avg=14330.00, stdev=639.02, samples=3 00:24:37.714 lat (usec) : 4=0.02%, 50=0.11%, 100=99.43%, 250=0.44% 00:24:37.714 cpu : usr=4.45%, sys=11.05%, ctx=28220, majf=0, minf=1 00:24:37.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:37.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:24:37.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:37.714 issued rwts: total=28110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:37.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:37.714 00:24:37.714 Run status group 0 (all jobs): 00:24:37.714 READ: bw=7024KiB/s (7193kB/s), 7024KiB/s-7024KiB/s (7193kB/s-7193kB/s), io=13.7MiB (14.4MB), run=2001-2001msec 00:24:37.714 00:24:37.714 Disk stats (read/write): 00:24:37.714 sda: ios=26491/0, merge=0/0, ticks=1650/0, in_queue=1650, util=95.13% 00:24:37.714 05:13:52 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@21 -- # iscsiadm -m node --logout -p 10.0.0.1:3260 00:24:37.973 Logging out of session [sid: 40, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:24:37.973 Logout of [sid: 40, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@22 -- # waitforiscsidevices 0 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=0 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:24:37.973 iscsiadm: No active sessions. 
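The `waitforiscsidevices` calls above (after each login and logout) retry up to 20 times, counting `Attached scsi disk sd*` lines in `iscsiadm -m session -P 3` output until the expected device count appears. A sketch of that polling loop, with the session-listing command parameterised so it can be exercised against a stub instead of a live session:

```shell
# Poll until $lister reports exactly $num attached scsi disks, retrying
# up to 20 times. grep -c exits non-zero on no match but still prints 0,
# so `|| true` keeps the count capture from aborting under `set -e`.
waitfordevices() {
    local num=$1 lister=$2 i n
    for ((i = 1; i <= 20; i++)); do
        n=$($lister | grep -c 'Attached scsi disk sd[a-z]*' || true)
        [ "$n" -eq "$num" ] && return 0
        sleep 0.1
    done
    return 1
}
```

In the log, the count goes 1 after login and 0 after logout, which is why `iscsiadm: No active sessions.` is tolerated (`true` swallows its non-zero exit) on the zero-device check.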
00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # true 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=0 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:24:37.973 00:24:37.973 real 0m9.506s 00:24:37.973 user 0m0.723s 00:24:37.973 sys 0m1.108s 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@10 -- # set +x 00:24:37.973 ************************************ 00:24:37.973 END TEST iscsi_tgt_digest 00:24:37.973 ************************************ 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:24:37.973 Cleaning up iSCSI connection 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@92 -- # iscsicleanup 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:24:37.973 iscsiadm: No matching sessions found 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@981 -- # true 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@983 -- # rm -rf 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@93 -- # killprocess 83365 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@948 -- # '[' -z 83365 ']' 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests -- 
common/autotest_common.sh@952 -- # kill -0 83365 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@953 -- # uname 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83365 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83365' 00:24:37.973 killing process with pid 83365 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@967 -- # kill 83365 00:24:37.973 05:13:52 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@972 -- # wait 83365 00:24:40.504 05:13:55 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@94 -- # iscsitestfini 00:24:40.504 05:13:55 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:24:40.504 00:24:40.504 real 0m15.461s 00:24:40.504 user 0m55.211s 00:24:40.504 sys 0m3.642s 00:24:40.504 05:13:55 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:40.504 05:13:55 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:24:40.504 ************************************ 00:24:40.504 END TEST iscsi_tgt_digests 00:24:40.504 ************************************ 00:24:40.763 05:13:55 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@43 -- # run_test iscsi_tgt_fuzz /home/vagrant/spdk_repo/spdk/test/fuzz/autofuzz_iscsi.sh --timeout=30 00:24:40.763 05:13:55 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:40.763 05:13:55 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:40.763 05:13:55 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 
00:24:40.763 ************************************ 00:24:40.763 START TEST iscsi_tgt_fuzz 00:24:40.763 ************************************ 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/fuzz/autofuzz_iscsi.sh --timeout=30 00:24:40.763 * Looking for test storage... 00:24:40.763 * Found test storage at /home/vagrant/spdk_repo/spdk/test/fuzz 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@24 -- # 
NETMASK=10.0.0.2/32 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@11 -- # iscsitestinit 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@13 -- # '[' -z 10.0.0.1 ']' 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@18 -- # '[' -z 10.0.0.2 ']' 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@23 -- # timing_enter iscsi_fuzz_test 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@25 -- # MALLOC_BDEV_SIZE=64 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@26 -- # MALLOC_BLOCK_SIZE=4096 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@28 -- # TEST_TIMEOUT=1200 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@31 -- # for i in "$@" 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@32 -- # case "$i" in 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@34 -- # TEST_TIMEOUT=30 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@39 -- # timing_enter start_iscsi_tgt 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:40.763 Process iscsipid: 83772 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@42 -- # iscsipid=83772 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@43 -- # echo 'Process iscsipid: 83772' 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@45 -- # trap 'killprocess $iscsipid; exit 1' SIGINT SIGTERM EXIT 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@47 -- # waitforlisten 83772 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@41 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --disable-cpumask-locks --wait-for-rpc 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@829 -- # '[' -z 83772 ']' 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:40.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:40.763 05:13:55 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:41.698 05:13:56 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:41.698 05:13:56 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@862 -- # return 0 00:24:41.698 05:13:56 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@49 -- # rpc_cmd iscsi_set_options -o 60 -a 16 00:24:41.698 05:13:56 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.698 05:13:56 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:41.698 05:13:56 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.698 05:13:56 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@50 -- # rpc_cmd framework_start_init 00:24:41.698 05:13:56 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.698 05:13:56 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:42.633 iscsi_tgt is listening. Running tests... 00:24:42.633 05:13:57 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.633 05:13:57 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@51 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:24:42.633 05:13:57 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@52 -- # timing_exit start_iscsi_tgt 00:24:42.633 05:13:57 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:42.633 05:13:57 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:42.891 05:13:57 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@54 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:24:42.891 05:13:57 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.891 05:13:57 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:42.891 05:13:57 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.891 05:13:57 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@55 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:24:42.891 05:13:57 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.891 05:13:57 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:42.891 05:13:57 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.891 05:13:57 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@56 -- # rpc_cmd bdev_malloc_create 64 4096 00:24:42.891 05:13:57 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.891 05:13:57 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:42.891 Malloc0 00:24:42.891 05:13:57 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.891 05:13:57 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@57 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Malloc0:0 1:2 256 -d 00:24:42.891 05:13:57 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.891 05:13:57 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:42.891 05:13:57 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:24:42.891 05:13:57 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@58 -- # sleep 1 00:24:43.826 05:13:58 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@60 -- # trap 'killprocess $iscsipid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:24:43.826 05:13:58 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/iscsi_fuzz/iscsi_fuzz -m 0xF0 -T 10.0.0.1 -t 30 00:25:15.890 Fuzzing completed. Shutting down the fuzz application. 00:25:15.890 00:25:15.890 device 0x6110000160c0 stats: Sent 13142 valid opcode PDUs, 119966 invalid opcode PDUs. 00:25:15.890 05:14:29 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@64 -- # rpc_cmd iscsi_delete_target_node iqn.2016-06.io.spdk:disk1 00:25:15.890 05:14:29 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.890 05:14:29 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:15.890 05:14:29 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.890 05:14:29 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@67 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:15.890 05:14:29 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.890 05:14:29 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:15.890 05:14:29 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.890 05:14:29 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:25:15.890 05:14:29 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@71 -- # killprocess 83772 00:25:15.890 05:14:29 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@948 -- # '[' -z 83772 ']' 00:25:15.890 05:14:29 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@952 -- # kill -0 83772 00:25:15.890 05:14:29 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@953 -- # uname 00:25:15.890 05:14:29 iscsi_tgt.iscsi_tgt_fuzz -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:15.890 05:14:29 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83772 00:25:15.890 killing process with pid 83772 00:25:15.890 05:14:29 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:15.890 05:14:29 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:15.890 05:14:29 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83772' 00:25:15.890 05:14:29 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@967 -- # kill 83772 00:25:15.890 05:14:29 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@972 -- # wait 83772 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@73 -- # iscsitestfini 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@75 -- # timing_exit iscsi_fuzz_test 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:18.426 ************************************ 00:25:18.426 END TEST iscsi_tgt_fuzz 00:25:18.426 ************************************ 00:25:18.426 00:25:18.426 real 0m37.581s 00:25:18.426 user 3m27.384s 00:25:18.426 sys 0m18.806s 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:18.426 05:14:32 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@44 -- # run_test iscsi_tgt_multiconnection /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection/multiconnection.sh 00:25:18.426 05:14:32 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:18.426 05:14:32 iscsi_tgt -- common/autotest_common.sh@1105 
-- # xtrace_disable 00:25:18.426 05:14:32 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:25:18.426 ************************************ 00:25:18.426 START TEST iscsi_tgt_multiconnection 00:25:18.426 ************************************ 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection/multiconnection.sh 00:25:18.426 * Looking for test storage... 00:25:18.426 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@21 -- # 
TARGET_IP2=10.0.0.3 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@11 -- # iscsitestinit 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@16 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@18 -- # CONNECTION_NUMBER=30 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@40 -- # timing_enter start_iscsi_tgt 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@42 -- # iscsipid=84237 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@43 -- # echo 'iSCSI target launched. pid: 84237' 00:25:18.426 iSCSI target launched. pid: 84237 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@44 -- # trap 'remove_backends; iscsicleanup; killprocess $iscsipid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@46 -- # waitforlisten 84237 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 84237 ']' 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@41 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:18.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:18.426 05:14:32 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.686 [2024-07-24 05:14:33.058242] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:25:18.686 [2024-07-24 05:14:33.058408] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84237 ] 00:25:18.686 [2024-07-24 05:14:33.240429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.943 [2024-07-24 05:14:33.538078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.508 05:14:33 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:19.508 05:14:33 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:25:19.508 05:14:33 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 128 00:25:19.508 05:14:33 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:25:20.075 [2024-07-24 05:14:34.517585] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:21.010 05:14:35 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:21.010 05:14:35 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:25:21.269 05:14:35 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@50 -- # timing_exit start_iscsi_tgt 00:25:21.269 05:14:35 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:21.269 05:14:35 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.269 05:14:35 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@52 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:25:21.528 05:14:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:25:21.787 Creating an iSCSI target node. 00:25:21.787 05:14:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@55 -- # echo 'Creating an iSCSI target node.' 00:25:21.787 05:14:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs0 -c 1048576 00:25:22.045 05:14:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@56 -- # ls_guid=b1c4338b-f291-4e75-9f23-ce607598f553 00:25:22.045 05:14:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@59 -- # get_lvs_free_mb b1c4338b-f291-4e75-9f23-ce607598f553 00:25:22.045 05:14:36 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1362 -- # local lvs_uuid=b1c4338b-f291-4e75-9f23-ce607598f553 00:25:22.045 05:14:36 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1363 -- # local lvs_info 00:25:22.045 05:14:36 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1364 -- # local fc 00:25:22.045 05:14:36 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1365 -- # local cs 00:25:22.045 05:14:36 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1366 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:22.304 05:14:36 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1366 -- # lvs_info='[ 00:25:22.304 { 00:25:22.304 "uuid": "b1c4338b-f291-4e75-9f23-ce607598f553", 00:25:22.304 "name": "lvs0", 00:25:22.304 "base_bdev": "Nvme0n1", 00:25:22.304 "total_data_clusters": 5099, 00:25:22.304 "free_clusters": 5099, 00:25:22.304 "block_size": 4096, 00:25:22.304 "cluster_size": 
1048576 00:25:22.304 } 00:25:22.304 ]' 00:25:22.304 05:14:36 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="b1c4338b-f291-4e75-9f23-ce607598f553") .free_clusters' 00:25:22.304 05:14:36 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1367 -- # fc=5099 00:25:22.304 05:14:36 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="b1c4338b-f291-4e75-9f23-ce607598f553") .cluster_size' 00:25:22.304 5099 00:25:22.304 05:14:36 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1368 -- # cs=1048576 00:25:22.304 05:14:36 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1371 -- # free_mb=5099 00:25:22.304 05:14:36 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1372 -- # echo 5099 00:25:22.304 05:14:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@60 -- # lvol_bdev_size=169 00:25:22.304 05:14:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # seq 1 30 00:25:22.304 05:14:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:22.304 05:14:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_1 169 00:25:22.562 86e691ae-af89-4aae-ba5d-bd4e0e7f337a 00:25:22.562 05:14:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:22.562 05:14:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_2 169 00:25:22.821 9c70f62c-81a2-4582-9650-8ca1ef7c3c05 00:25:22.821 05:14:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i 
in $(seq 1 $CONNECTION_NUMBER) 00:25:22.821 05:14:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_3 169 00:25:22.821 382e4647-5a10-4ad0-882d-280c62e68eab 00:25:22.821 05:14:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:22.821 05:14:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_4 169 00:25:23.080 e193ff87-f5e9-48f0-9abc-86f7f57675df 00:25:23.080 05:14:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:23.080 05:14:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_5 169 00:25:23.339 4b7f910c-c368-486d-b982-64a3351b0dd0 00:25:23.339 05:14:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:23.339 05:14:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_6 169 00:25:23.339 68e0d436-0a61-4e91-8297-a0d98bd3d771 00:25:23.339 05:14:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:23.339 05:14:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_7 169 00:25:23.598 095f87bf-62c1-443f-9f00-6b1c8a415d4b 00:25:23.598 05:14:38 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:23.598 05:14:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_8 169 00:25:23.856 850ed500-c906-46ff-ba58-727c6f8ee555 00:25:23.856 05:14:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:23.856 05:14:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_9 169 00:25:23.856 03df8ffd-0d9b-405b-bfbb-e2aeab55eccd 00:25:23.856 05:14:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:23.856 05:14:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_10 169 00:25:24.115 ee08d7b3-7be2-43f2-8b9e-db673dfea834 00:25:24.115 05:14:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:24.115 05:14:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_11 169 00:25:24.374 6b5c87ba-9b0b-43b3-b31b-27f10bbffe35 00:25:24.374 05:14:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:24.374 05:14:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_12 169 00:25:24.374 de46095c-77f0-4366-b7c8-41144b0ccaff 00:25:24.374 05:14:38 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:24.374 05:14:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_13 169 00:25:24.633 7cdb52f1-6a80-4b23-ab5d-887505c114f6 00:25:24.633 05:14:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:24.633 05:14:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_14 169 00:25:24.891 b9a2b099-e6e7-4737-beef-3d7e12668955 00:25:24.891 05:14:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:24.891 05:14:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_15 169 00:25:25.151 b06d4374-ed25-4817-8586-828556f478f6 00:25:25.151 05:14:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:25.151 05:14:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_16 169 00:25:25.151 bf60e6af-a695-4770-9a0b-c86c633f6b2b 00:25:25.151 05:14:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:25.151 05:14:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_17 169 00:25:25.410 
b3e4856a-b323-4595-9466-871f7517b2df 00:25:25.410 05:14:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:25.410 05:14:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_18 169 00:25:25.670 62092dba-01c7-4cf2-91bc-f3333d56e1dd 00:25:25.670 05:14:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:25.670 05:14:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_19 169 00:25:25.929 780942ce-1a2c-405e-9022-d1030f6236ff 00:25:25.929 05:14:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:25.929 05:14:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_20 169 00:25:25.929 c2378b34-dae2-471c-afda-e65ccb749af5 00:25:25.929 05:14:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:25.929 05:14:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_21 169 00:25:26.201 8edfffff-ebed-4385-9919-ff6266d92354 00:25:26.201 05:14:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:26.201 05:14:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
b1c4338b-f291-4e75-9f23-ce607598f553 lbd_22 169 00:25:26.458 9f5b07be-7c3f-4df0-a3f4-71249c7873fe 00:25:26.458 05:14:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:26.458 05:14:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_23 169 00:25:26.716 b699bd39-f59b-4353-83f0-4bfe7453dc8b 00:25:26.716 05:14:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:26.716 05:14:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_24 169 00:25:26.716 aaf58efe-c52a-4774-bc64-23339e6c03b2 00:25:26.716 05:14:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:26.716 05:14:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_25 169 00:25:26.974 91b79db1-a04d-4ea8-8b80-232aac055bd4 00:25:26.974 05:14:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:26.974 05:14:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_26 169 00:25:27.233 1e60c176-863d-47e6-b966-2680acf105c9 00:25:27.233 05:14:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:27.233 05:14:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_27 169 00:25:27.233 2348a6ae-7ee8-4c19-80ef-918e2d5f6ea1 00:25:27.233 05:14:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:27.233 05:14:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_28 169 00:25:27.491 0e49d408-d70a-4652-aa73-571c30ff6c00 00:25:27.491 05:14:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:27.491 05:14:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_29 169 00:25:27.750 e81eb61e-f80a-463c-a3ba-2d21cff6f396 00:25:27.750 05:14:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:27.750 05:14:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1c4338b-f291-4e75-9f23-ce607598f553 lbd_30 169 00:25:27.750 742e26ff-7670-4155-97ed-272893c544f4 00:25:27.750 05:14:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # seq 1 30 00:25:27.750 05:14:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:27.750 05:14:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_1:0 00:25:27.750 05:14:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target1 Target1_alias lvs0/lbd_1:0 1:2 256 -d 00:25:28.009 05:14:42 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:28.009 05:14:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_2:0 00:25:28.009 05:14:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target2 Target2_alias lvs0/lbd_2:0 1:2 256 -d 00:25:28.267 05:14:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:28.267 05:14:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_3:0 00:25:28.267 05:14:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias lvs0/lbd_3:0 1:2 256 -d 00:25:28.267 05:14:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:28.267 05:14:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_4:0 00:25:28.267 05:14:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target4 Target4_alias lvs0/lbd_4:0 1:2 256 -d 00:25:28.526 05:14:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:28.526 05:14:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_5:0 00:25:28.526 05:14:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target5 Target5_alias lvs0/lbd_5:0 1:2 256 -d 00:25:28.784 05:14:43 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:28.784 05:14:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_6:0 00:25:28.784 05:14:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target6 Target6_alias lvs0/lbd_6:0 1:2 256 -d 00:25:29.043 05:14:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:29.043 05:14:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_7:0 00:25:29.043 05:14:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target7 Target7_alias lvs0/lbd_7:0 1:2 256 -d 00:25:29.302 05:14:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:29.302 05:14:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_8:0 00:25:29.302 05:14:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target8 Target8_alias lvs0/lbd_8:0 1:2 256 -d 00:25:29.302 05:14:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:29.302 05:14:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_9:0 00:25:29.302 05:14:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target9 Target9_alias lvs0/lbd_9:0 1:2 256 -d 00:25:29.560 05:14:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 
$CONNECTION_NUMBER) 00:25:29.560 05:14:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_10:0 00:25:29.560 05:14:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target10 Target10_alias lvs0/lbd_10:0 1:2 256 -d 00:25:29.817 05:14:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:29.817 05:14:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_11:0 00:25:29.817 05:14:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target11 Target11_alias lvs0/lbd_11:0 1:2 256 -d 00:25:30.076 05:14:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:30.076 05:14:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_12:0 00:25:30.076 05:14:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target12 Target12_alias lvs0/lbd_12:0 1:2 256 -d 00:25:30.076 05:14:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:30.076 05:14:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_13:0 00:25:30.076 05:14:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target13 Target13_alias lvs0/lbd_13:0 1:2 256 -d 00:25:30.335 05:14:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:30.335 05:14:44 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_14:0 00:25:30.335 05:14:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target14 Target14_alias lvs0/lbd_14:0 1:2 256 -d 00:25:30.594 05:14:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:30.594 05:14:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_15:0 00:25:30.594 05:14:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target15 Target15_alias lvs0/lbd_15:0 1:2 256 -d 00:25:30.852 05:14:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:30.852 05:14:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_16:0 00:25:30.852 05:14:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target16 Target16_alias lvs0/lbd_16:0 1:2 256 -d 00:25:30.852 05:14:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:30.852 05:14:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_17:0 00:25:30.852 05:14:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target17 Target17_alias lvs0/lbd_17:0 1:2 256 -d 00:25:31.111 05:14:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:31.111 05:14:45 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_18:0 00:25:31.111 05:14:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target18 Target18_alias lvs0/lbd_18:0 1:2 256 -d 00:25:31.369 05:14:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:31.369 05:14:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_19:0 00:25:31.369 05:14:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target19 Target19_alias lvs0/lbd_19:0 1:2 256 -d 00:25:31.628 05:14:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:31.628 05:14:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_20:0 00:25:31.628 05:14:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target20 Target20_alias lvs0/lbd_20:0 1:2 256 -d 00:25:31.628 05:14:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:31.628 05:14:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_21:0 00:25:31.628 05:14:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target21 Target21_alias lvs0/lbd_21:0 1:2 256 -d 00:25:31.886 05:14:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:31.886 05:14:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # 
lun=lvs0/lbd_22:0 00:25:31.886 05:14:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target22 Target22_alias lvs0/lbd_22:0 1:2 256 -d 00:25:31.886 05:14:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:31.886 05:14:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_23:0 00:25:31.886 05:14:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target23 Target23_alias lvs0/lbd_23:0 1:2 256 -d 00:25:32.145 05:14:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:32.145 05:14:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_24:0 00:25:32.145 05:14:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target24 Target24_alias lvs0/lbd_24:0 1:2 256 -d 00:25:32.403 05:14:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:32.403 05:14:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_25:0 00:25:32.403 05:14:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target25 Target25_alias lvs0/lbd_25:0 1:2 256 -d 00:25:32.403 05:14:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:32.403 05:14:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_26:0 00:25:32.403 05:14:47 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target26 Target26_alias lvs0/lbd_26:0 1:2 256 -d 00:25:32.661 05:14:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:32.661 05:14:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_27:0 00:25:32.661 05:14:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target27 Target27_alias lvs0/lbd_27:0 1:2 256 -d 00:25:32.920 05:14:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:32.920 05:14:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_28:0 00:25:32.920 05:14:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target28 Target28_alias lvs0/lbd_28:0 1:2 256 -d 00:25:33.179 05:14:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:33.179 05:14:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_29:0 00:25:33.179 05:14:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target29 Target29_alias lvs0/lbd_29:0 1:2 256 -d 00:25:33.179 05:14:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:33.179 05:14:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_30:0 00:25:33.179 05:14:47 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target30 Target30_alias lvs0/lbd_30:0 1:2 256 -d 00:25:33.437 05:14:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@69 -- # sleep 1 00:25:34.373 Logging into iSCSI target. 00:25:34.373 05:14:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@71 -- # echo 'Logging into iSCSI target.' 00:25:34.373 05:14:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@72 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target4 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target5 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target6 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target7 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target8 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target9 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target10 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target11 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target12 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target13 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target14 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target15 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target16 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target17 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target18 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target19 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target20 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target21 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target22 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target23 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target24 
00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target25 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target26 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target27 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target28 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target29 00:25:34.373 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target30 00:25:34.373 05:14:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@73 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:25:34.631 [2024-07-24 05:14:49.034157] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:34.631 [2024-07-24 05:14:49.058630] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:34.631 [2024-07-24 05:14:49.059013] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:34.631 [2024-07-24 05:14:49.069440] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:34.631 [2024-07-24 05:14:49.096198] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:34.631 [2024-07-24 05:14:49.121409] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:34.631 [2024-07-24 05:14:49.143748] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:34.631 [2024-07-24 05:14:49.161573] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:34.631 [2024-07-24 05:14:49.170159] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:34.631 [2024-07-24 05:14:49.188937] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:34.631 [2024-07-24 05:14:49.219794] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:34.631 [2024-07-24 05:14:49.261166] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:34.889 [2024-07-24 05:14:49.263380] 
scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: 
default, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] 00:25:34.889 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] 00:25:34.889 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:25:34.889 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:25:34.889 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:25:34.889 Login to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:25:34.889 Login to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:25:34.889 Login to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 
00:25:34.889 Login to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:25:34.889 Login to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:25:34.889 Login to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:25:34.889 Login to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:25:34.889 Login to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:25:34.889 Login to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:25:34.889 Login to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:25:34.889 Login to [iface: default, target: iqn.2016-06.io.spdk:Target14, por[2024-07-24 05:14:49.297668] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:34.889 [2024-07-24 05:14:49.316422] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:34.889 [2024-07-24 05:14:49.342028] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:34.889 [2024-07-24 05:14:49.371675] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:34.889 [2024-07-24 05:14:49.398336] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:34.889 [2024-07-24 05:14:49.425741] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:34.889 [2024-07-24 05:14:49.448310] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:34.889 [2024-07-24 05:14:49.481480] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:34.889 [2024-07-24 05:14:49.514119] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:35.147 [2024-07-24 05:14:49.545809] 
scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:35.147 [2024-07-24 05:14:49.578306] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:35.147 [2024-07-24 05:14:49.611697] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:35.147 [2024-07-24 05:14:49.639949] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:35.148 [2024-07-24 05:14:49.674761] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:35.148 [2024-07-24 05:14:49.706416] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:35.148 [2024-07-24 05:14:49.744785] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:35.148 tal: 10.0.0.1,3260] successful. 00:25:35.148 Login to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 00:25:35.148 Login to [iface: default, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] successful. 00:25:35.148 Login to [iface: default, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] successful. 00:25:35.148 Login to [iface: default, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] successful. 00:25:35.148 Login to [iface: default, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] successful. 00:25:35.148 Login to [iface: default, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] successful. 00:25:35.148 Login to [iface: default, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] successful. 00:25:35.148 Login to [iface: default, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] successful. 00:25:35.148 Login to [iface: default, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] successful. 00:25:35.148 Login to [iface: default, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] successful. 
00:25:35.148 Login to [iface: default, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] successful. 00:25:35.148 Login to [iface: default, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] successful. 00:25:35.148 Login to [iface: default, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] successful. 00:25:35.148 Login to [iface: default, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] successful. 00:25:35.148 Login to [iface: default, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] successful. 00:25:35.148 Login to [iface: default, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] successful. 00:25:35.148 05:14:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@74 -- # waitforiscsidevices 30 00:25:35.148 05:14:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@116 -- # local num=30 00:25:35.148 05:14:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:25:35.148 05:14:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:25:35.148 05:14:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:25:35.148 05:14:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:25:35.148 [2024-07-24 05:14:49.762223] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:35.406 05:14:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # n=30 00:25:35.406 05:14:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@120 -- # '[' 30 -ne 30 ']' 00:25:35.406 05:14:49 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@123 -- # return 0 00:25:35.406 05:14:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@76 -- # echo 'Running FIO' 00:25:35.406 Running FIO 00:25:35.406 05:14:49 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 64 -t randrw -r 5 00:25:35.406 [global] 00:25:35.406 thread=1 00:25:35.406 invalidate=1 00:25:35.406 rw=randrw 00:25:35.406 time_based=1 00:25:35.406 runtime=5 00:25:35.406 ioengine=libaio 00:25:35.406 direct=1 00:25:35.406 bs=131072 00:25:35.406 iodepth=64 00:25:35.406 norandommap=1 00:25:35.406 numjobs=1 00:25:35.406 00:25:35.406 [job0] 00:25:35.406 filename=/dev/sda 00:25:35.406 [job1] 00:25:35.406 filename=/dev/sdb 00:25:35.406 [job2] 00:25:35.406 filename=/dev/sdc 00:25:35.406 [job3] 00:25:35.406 filename=/dev/sdd 00:25:35.406 [job4] 00:25:35.406 filename=/dev/sde 00:25:35.406 [job5] 00:25:35.406 filename=/dev/sdf 00:25:35.406 [job6] 00:25:35.406 filename=/dev/sdg 00:25:35.406 [job7] 00:25:35.406 filename=/dev/sdh 00:25:35.406 [job8] 00:25:35.406 filename=/dev/sdi 00:25:35.406 [job9] 00:25:35.406 filename=/dev/sdj 00:25:35.406 [job10] 00:25:35.406 filename=/dev/sdk 00:25:35.406 [job11] 00:25:35.406 filename=/dev/sdl 00:25:35.406 [job12] 00:25:35.406 filename=/dev/sdm 00:25:35.406 [job13] 00:25:35.406 filename=/dev/sdn 00:25:35.406 [job14] 00:25:35.406 filename=/dev/sdo 00:25:35.406 [job15] 00:25:35.406 filename=/dev/sdp 00:25:35.406 [job16] 00:25:35.406 filename=/dev/sdq 00:25:35.406 [job17] 00:25:35.406 filename=/dev/sdr 00:25:35.406 [job18] 00:25:35.406 filename=/dev/sds 00:25:35.406 [job19] 00:25:35.406 filename=/dev/sdt 00:25:35.406 [job20] 00:25:35.406 filename=/dev/sdu 00:25:35.406 [job21] 00:25:35.406 filename=/dev/sdv 00:25:35.406 [job22] 00:25:35.406 filename=/dev/sdw 00:25:35.406 [job23] 00:25:35.406 filename=/dev/sdx 00:25:35.406 [job24] 00:25:35.406 filename=/dev/sdy 00:25:35.406 [job25] 00:25:35.406 filename=/dev/sdz 00:25:35.406 [job26] 00:25:35.406 filename=/dev/sdaa 00:25:35.406 [job27] 00:25:35.406 filename=/dev/sdab 00:25:35.406 [job28] 00:25:35.406 filename=/dev/sdac 00:25:35.406 [job29] 00:25:35.406 filename=/dev/sdad 
00:25:35.972 queue_depth set to 113 (sda) 00:25:35.972 queue_depth set to 113 (sdb) 00:25:35.972 queue_depth set to 113 (sdc) 00:25:35.972 queue_depth set to 113 (sdd) 00:25:35.972 queue_depth set to 113 (sde) 00:25:35.972 queue_depth set to 113 (sdf) 00:25:35.972 queue_depth set to 113 (sdg) 00:25:36.230 queue_depth set to 113 (sdh) 00:25:36.230 queue_depth set to 113 (sdi) 00:25:36.230 queue_depth set to 113 (sdj) 00:25:36.230 queue_depth set to 113 (sdk) 00:25:36.230 queue_depth set to 113 (sdl) 00:25:36.230 queue_depth set to 113 (sdm) 00:25:36.231 queue_depth set to 113 (sdn) 00:25:36.231 queue_depth set to 113 (sdo) 00:25:36.231 queue_depth set to 113 (sdp) 00:25:36.231 queue_depth set to 113 (sdq) 00:25:36.231 queue_depth set to 113 (sdr) 00:25:36.231 queue_depth set to 113 (sds) 00:25:36.231 queue_depth set to 113 (sdt) 00:25:36.499 queue_depth set to 113 (sdu) 00:25:36.499 queue_depth set to 113 (sdv) 00:25:36.499 queue_depth set to 113 (sdw) 00:25:36.499 queue_depth set to 113 (sdx) 00:25:36.499 queue_depth set to 113 (sdy) 00:25:36.499 queue_depth set to 113 (sdz) 00:25:36.499 queue_depth set to 113 (sdaa) 00:25:36.499 queue_depth set to 113 (sdab) 00:25:36.499 queue_depth set to 113 (sdac) 00:25:36.499 queue_depth set to 113 (sdad) 00:25:36.768 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.768 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.768 job2: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.768 job3: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.768 job4: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.769 job5: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, 
ioengine=libaio, iodepth=64 00:25:36.769 job6: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.769 job7: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.769 job8: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.769 job9: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.769 job10: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.769 job11: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.769 job12: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.769 job13: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.769 job14: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.769 job15: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.769 job16: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.769 job17: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.769 job18: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.769 job19: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.769 job20: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.769 job21: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 
128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.769 job22: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.769 job23: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.769 job24: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.769 job25: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.769 job26: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.769 job27: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.769 job28: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.769 job29: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:25:36.769 fio-3.35 00:25:36.769 Starting 30 threads 00:25:36.769 [2024-07-24 05:14:51.247631] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 05:14:51.251925] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 05:14:51.256041] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 05:14:51.259899] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 05:14:51.262637] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 05:14:51.265407] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 05:14:51.268094] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 
05:14:51.270880] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 05:14:51.273500] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 05:14:51.276421] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 05:14:51.279171] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 05:14:51.281811] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 05:14:51.284429] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 05:14:51.287342] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 05:14:51.290057] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 05:14:51.292665] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 05:14:51.295432] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 05:14:51.297959] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 05:14:51.300717] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 05:14:51.303283] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 05:14:51.305867] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 05:14:51.308638] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 05:14:51.311240] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 05:14:51.313784] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 05:14:51.316349] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 05:14:51.319216] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 05:14:51.322182] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 05:14:51.324873] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 05:14:51.327618] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:36.769 [2024-07-24 05:14:51.330256] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.337 [2024-07-24 05:14:57.286609] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.337 [2024-07-24 05:14:57.302179] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.337 [2024-07-24 05:14:57.307596] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.337 [2024-07-24 05:14:57.310461] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.337 [2024-07-24 05:14:57.313224] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.337 [2024-07-24 05:14:57.316268] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.337 [2024-07-24 05:14:57.318977] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.337 [2024-07-24 05:14:57.321553] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.337 [2024-07-24 05:14:57.324315] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.337 [2024-07-24 05:14:57.330953] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:25:43.337 [2024-07-24 05:14:57.333758] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.337 [2024-07-24 05:14:57.336298] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.337 [2024-07-24 05:14:57.338838] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.337 [2024-07-24 05:14:57.341411] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.337 [2024-07-24 05:14:57.344031] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.337 [2024-07-24 05:14:57.347089] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.337 [2024-07-24 05:14:57.350226] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.337 00:25:43.337 job0: (groupid=0, jobs=1): err= 0: pid=85139: Wed Jul 24 05:14:57 2024 00:25:43.337 read: IOPS=80, BW=10.1MiB/s (10.6MB/s)(54.6MiB/5397msec) 00:25:43.337 slat (nsec): min=6902, max=82704, avg=27589.25, stdev=12480.00 00:25:43.337 clat (msec): min=7, max=424, avg=53.10, stdev=34.15 00:25:43.337 lat (msec): min=7, max=424, avg=53.13, stdev=34.15 00:25:43.337 clat percentiles (msec): 00:25:43.337 | 1.00th=[ 16], 5.00th=[ 37], 10.00th=[ 44], 20.00th=[ 45], 00:25:43.337 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 47], 00:25:43.338 | 70.00th=[ 48], 80.00th=[ 49], 90.00th=[ 62], 95.00th=[ 105], 00:25:43.338 | 99.00th=[ 194], 99.50th=[ 201], 99.90th=[ 426], 99.95th=[ 426], 00:25:43.338 | 99.99th=[ 426] 00:25:43.338 bw ( KiB/s): min= 6912, max=18944, per=3.33%, avg=11133.70, stdev=3366.33, samples=10 00:25:43.338 iops : min= 54, max= 148, avg=86.90, stdev=26.29, samples=10 00:25:43.338 write: IOPS=87, BW=11.0MiB/s (11.5MB/s)(59.2MiB/5397msec); 0 zone resets 00:25:43.338 slat (usec): min=10, max=104, avg=33.96, stdev=13.31 00:25:43.338 clat (msec): min=169, max=1049, 
avg=678.75, stdev=102.40 00:25:43.338 lat (msec): min=169, max=1049, avg=678.78, stdev=102.40 00:25:43.338 clat percentiles (msec): 00:25:43.338 | 1.00th=[ 279], 5.00th=[ 460], 10.00th=[ 592], 20.00th=[ 667], 00:25:43.338 | 30.00th=[ 676], 40.00th=[ 684], 50.00th=[ 693], 60.00th=[ 701], 00:25:43.338 | 70.00th=[ 709], 80.00th=[ 718], 90.00th=[ 726], 95.00th=[ 818], 00:25:43.338 | 99.00th=[ 969], 99.50th=[ 995], 99.90th=[ 1053], 99.95th=[ 1053], 00:25:43.338 | 99.99th=[ 1053] 00:25:43.338 bw ( KiB/s): min= 5376, max=11520, per=3.15%, avg=10570.70, stdev=1850.29, samples=10 00:25:43.338 iops : min= 42, max= 90, avg=82.50, stdev=14.46, samples=10 00:25:43.338 lat (msec) : 10=0.33%, 20=0.33%, 50=41.05%, 100=3.62%, 250=2.85% 00:25:43.338 lat (msec) : 500=2.96%, 750=45.77%, 1000=2.85%, 2000=0.22% 00:25:43.338 cpu : usr=0.17%, sys=0.50%, ctx=519, majf=0, minf=1 00:25:43.338 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.1% 00:25:43.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.338 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.338 issued rwts: total=437,474,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.338 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.338 job1: (groupid=0, jobs=1): err= 0: pid=85140: Wed Jul 24 05:14:57 2024 00:25:43.338 read: IOPS=77, BW=9963KiB/s (10.2MB/s)(52.4MiB/5383msec) 00:25:43.338 slat (usec): min=9, max=974, avg=46.19, stdev=91.22 00:25:43.338 clat (msec): min=23, max=413, avg=57.43, stdev=41.92 00:25:43.338 lat (msec): min=23, max=413, avg=57.48, stdev=41.92 00:25:43.338 clat percentiles (msec): 00:25:43.338 | 1.00th=[ 42], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:25:43.338 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 46], 60.00th=[ 47], 00:25:43.338 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 90], 95.00th=[ 118], 00:25:43.338 | 99.00th=[ 194], 99.50th=[ 401], 99.90th=[ 414], 99.95th=[ 414], 00:25:43.338 | 99.99th=[ 414] 
00:25:43.338 bw ( KiB/s): min= 6656, max=16929, per=3.18%, avg=10625.20, stdev=2895.16, samples=10 00:25:43.338 iops : min= 52, max= 132, avg=82.90, stdev=22.57, samples=10 00:25:43.338 write: IOPS=87, BW=10.9MiB/s (11.5MB/s)(58.9MiB/5383msec); 0 zone resets 00:25:43.338 slat (usec): min=11, max=684, avg=46.67, stdev=60.44 00:25:43.338 clat (msec): min=182, max=1084, avg=679.24, stdev=108.04 00:25:43.338 lat (msec): min=182, max=1084, avg=679.28, stdev=108.04 00:25:43.338 clat percentiles (msec): 00:25:43.338 | 1.00th=[ 284], 5.00th=[ 460], 10.00th=[ 592], 20.00th=[ 659], 00:25:43.338 | 30.00th=[ 676], 40.00th=[ 684], 50.00th=[ 693], 60.00th=[ 701], 00:25:43.338 | 70.00th=[ 709], 80.00th=[ 709], 90.00th=[ 743], 95.00th=[ 835], 00:25:43.338 | 99.00th=[ 1028], 99.50th=[ 1053], 99.90th=[ 1083], 99.95th=[ 1083], 00:25:43.338 | 99.99th=[ 1083] 00:25:43.338 bw ( KiB/s): min= 5386, max=11520, per=3.14%, avg=10546.00, stdev=1829.98, samples=10 00:25:43.338 iops : min= 42, max= 90, avg=82.30, stdev=14.31, samples=10 00:25:43.338 lat (msec) : 50=39.89%, 100=2.70%, 250=4.49%, 500=3.60%, 750=45.73% 00:25:43.338 lat (msec) : 1000=2.92%, 2000=0.67% 00:25:43.338 cpu : usr=0.17%, sys=0.71%, ctx=565, majf=0, minf=1 00:25:43.338 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.9% 00:25:43.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.338 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.338 issued rwts: total=419,471,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.338 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.338 job2: (groupid=0, jobs=1): err= 0: pid=85142: Wed Jul 24 05:14:57 2024 00:25:43.338 read: IOPS=89, BW=11.2MiB/s (11.7MB/s)(60.2MiB/5382msec) 00:25:43.338 slat (usec): min=9, max=224, avg=38.62, stdev=21.01 00:25:43.338 clat (msec): min=31, max=422, avg=55.65, stdev=39.16 00:25:43.338 lat (msec): min=31, max=422, avg=55.69, stdev=39.16 00:25:43.338 clat 
percentiles (msec): 00:25:43.338 | 1.00th=[ 34], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:25:43.338 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 47], 00:25:43.338 | 70.00th=[ 48], 80.00th=[ 49], 90.00th=[ 63], 95.00th=[ 140], 00:25:43.338 | 99.00th=[ 224], 99.50th=[ 393], 99.90th=[ 422], 99.95th=[ 422], 00:25:43.338 | 99.99th=[ 422] 00:25:43.338 bw ( KiB/s): min= 9216, max=15872, per=3.67%, avg=12259.70, stdev=2244.84, samples=10 00:25:43.338 iops : min= 72, max= 124, avg=95.70, stdev=17.50, samples=10 00:25:43.338 write: IOPS=87, BW=11.0MiB/s (11.5MB/s)(59.0MiB/5382msec); 0 zone resets 00:25:43.338 slat (usec): min=15, max=151, avg=42.72, stdev=19.48 00:25:43.338 clat (msec): min=181, max=1058, avg=671.87, stdev=104.83 00:25:43.338 lat (msec): min=181, max=1058, avg=671.92, stdev=104.83 00:25:43.338 clat percentiles (msec): 00:25:43.338 | 1.00th=[ 275], 5.00th=[ 447], 10.00th=[ 600], 20.00th=[ 651], 00:25:43.338 | 30.00th=[ 667], 40.00th=[ 676], 50.00th=[ 684], 60.00th=[ 693], 00:25:43.338 | 70.00th=[ 701], 80.00th=[ 709], 90.00th=[ 718], 95.00th=[ 793], 00:25:43.338 | 99.00th=[ 1028], 99.50th=[ 1045], 99.90th=[ 1062], 99.95th=[ 1062], 00:25:43.338 | 99.99th=[ 1062] 00:25:43.338 bw ( KiB/s): min= 5376, max=11520, per=3.14%, avg=10545.00, stdev=1836.74, samples=10 00:25:43.338 iops : min= 42, max= 90, avg=82.30, stdev=14.33, samples=10 00:25:43.338 lat (msec) : 50=45.18%, 100=1.89%, 250=3.46%, 500=3.14%, 750=43.29% 00:25:43.338 lat (msec) : 1000=2.31%, 2000=0.73% 00:25:43.338 cpu : usr=0.28%, sys=0.59%, ctx=497, majf=0, minf=1 00:25:43.338 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.4%, >=64=93.4% 00:25:43.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.338 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.338 issued rwts: total=482,472,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.338 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.338 job3: 
(groupid=0, jobs=1): err= 0: pid=85146: Wed Jul 24 05:14:57 2024 00:25:43.338 read: IOPS=82, BW=10.3MiB/s (10.8MB/s)(55.6MiB/5385msec) 00:25:43.338 slat (usec): min=10, max=352, avg=35.75, stdev=22.97 00:25:43.338 clat (msec): min=42, max=408, avg=56.02, stdev=34.65 00:25:43.338 lat (msec): min=42, max=408, avg=56.06, stdev=34.65 00:25:43.338 clat percentiles (msec): 00:25:43.338 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:25:43.338 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 47], 00:25:43.338 | 70.00th=[ 48], 80.00th=[ 49], 90.00th=[ 71], 95.00th=[ 124], 00:25:43.338 | 99.00th=[ 171], 99.50th=[ 171], 99.90th=[ 409], 99.95th=[ 409], 00:25:43.338 | 99.99th=[ 409] 00:25:43.338 bw ( KiB/s): min= 7168, max=18212, per=3.39%, avg=11341.90, stdev=3223.43, samples=10 00:25:43.338 iops : min= 56, max= 142, avg=88.50, stdev=25.09, samples=10 00:25:43.338 write: IOPS=88, BW=11.0MiB/s (11.6MB/s)(59.4MiB/5385msec); 0 zone resets 00:25:43.338 slat (nsec): min=15202, max=99241, avg=41866.73, stdev=15409.53 00:25:43.338 clat (msec): min=179, max=1021, avg=671.62, stdev=103.57 00:25:43.338 lat (msec): min=179, max=1021, avg=671.67, stdev=103.57 00:25:43.338 clat percentiles (msec): 00:25:43.338 | 1.00th=[ 275], 5.00th=[ 451], 10.00th=[ 592], 20.00th=[ 659], 00:25:43.338 | 30.00th=[ 667], 40.00th=[ 676], 50.00th=[ 684], 60.00th=[ 693], 00:25:43.338 | 70.00th=[ 701], 80.00th=[ 709], 90.00th=[ 726], 95.00th=[ 785], 00:25:43.338 | 99.00th=[ 986], 99.50th=[ 1003], 99.90th=[ 1020], 99.95th=[ 1020], 00:25:43.338 | 99.99th=[ 1020] 00:25:43.338 bw ( KiB/s): min= 5386, max=11520, per=3.15%, avg=10571.60, stdev=1842.89, samples=10 00:25:43.338 iops : min= 42, max= 90, avg=82.50, stdev=14.42, samples=10 00:25:43.338 lat (msec) : 50=41.09%, 100=3.80%, 250=3.70%, 500=3.26%, 750=45.33% 00:25:43.338 lat (msec) : 1000=2.61%, 2000=0.22% 00:25:43.338 cpu : usr=0.32%, sys=0.65%, ctx=535, majf=0, minf=1 00:25:43.338 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 
32=3.5%, >=64=93.2% 00:25:43.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.338 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.338 issued rwts: total=445,475,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.338 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.338 job4: (groupid=0, jobs=1): err= 0: pid=85195: Wed Jul 24 05:14:57 2024 00:25:43.338 read: IOPS=85, BW=10.7MiB/s (11.2MB/s)(57.5MiB/5396msec) 00:25:43.338 slat (usec): min=9, max=1644, avg=54.90, stdev=149.70 00:25:43.338 clat (msec): min=23, max=191, avg=54.31, stdev=23.58 00:25:43.338 lat (msec): min=23, max=191, avg=54.36, stdev=23.57 00:25:43.338 clat percentiles (msec): 00:25:43.338 | 1.00th=[ 33], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 45], 00:25:43.338 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 47], 00:25:43.338 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 86], 95.00th=[ 108], 00:25:43.338 | 99.00th=[ 146], 99.50th=[ 178], 99.90th=[ 192], 99.95th=[ 192], 00:25:43.338 | 99.99th=[ 192] 00:25:43.338 bw ( KiB/s): min= 8704, max=22316, per=3.52%, avg=11777.90, stdev=3976.88, samples=10 00:25:43.338 iops : min= 68, max= 174, avg=91.90, stdev=30.96, samples=10 00:25:43.338 write: IOPS=88, BW=11.0MiB/s (11.5MB/s)(59.4MiB/5396msec); 0 zone resets 00:25:43.338 slat (usec): min=11, max=780, avg=47.86, stdev=78.29 00:25:43.338 clat (msec): min=177, max=1081, avg=673.43, stdev=107.82 00:25:43.338 lat (msec): min=177, max=1081, avg=673.48, stdev=107.83 00:25:43.338 clat percentiles (msec): 00:25:43.338 | 1.00th=[ 284], 5.00th=[ 456], 10.00th=[ 567], 20.00th=[ 651], 00:25:43.338 | 30.00th=[ 676], 40.00th=[ 676], 50.00th=[ 693], 60.00th=[ 693], 00:25:43.338 | 70.00th=[ 701], 80.00th=[ 709], 90.00th=[ 726], 95.00th=[ 835], 00:25:43.338 | 99.00th=[ 1028], 99.50th=[ 1070], 99.90th=[ 1083], 99.95th=[ 1083], 00:25:43.338 | 99.99th=[ 1083] 00:25:43.338 bw ( KiB/s): min= 5130, max=11520, per=3.14%, avg=10546.00, stdev=1915.38, 
samples=10 00:25:43.338 iops : min= 40, max= 90, avg=82.30, stdev=14.98, samples=10 00:25:43.338 lat (msec) : 50=40.53%, 100=5.24%, 250=3.85%, 500=2.99%, 750=44.39% 00:25:43.338 lat (msec) : 1000=2.35%, 2000=0.64% 00:25:43.339 cpu : usr=0.32%, sys=0.39%, ctx=679, majf=0, minf=1 00:25:43.339 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.4%, >=64=93.3% 00:25:43.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.339 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.339 issued rwts: total=460,475,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.339 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.339 job5: (groupid=0, jobs=1): err= 0: pid=85196: Wed Jul 24 05:14:57 2024 00:25:43.339 read: IOPS=90, BW=11.3MiB/s (11.9MB/s)(61.0MiB/5384msec) 00:25:43.339 slat (usec): min=9, max=421, avg=36.83, stdev=33.72 00:25:43.339 clat (msec): min=30, max=407, avg=54.57, stdev=33.05 00:25:43.339 lat (msec): min=30, max=407, avg=54.61, stdev=33.04 00:25:43.339 clat percentiles (msec): 00:25:43.339 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:25:43.339 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 47], 00:25:43.339 | 70.00th=[ 48], 80.00th=[ 49], 90.00th=[ 68], 95.00th=[ 102], 00:25:43.339 | 99.00th=[ 171], 99.50th=[ 207], 99.90th=[ 409], 99.95th=[ 409], 00:25:43.339 | 99.99th=[ 409] 00:25:43.339 bw ( KiB/s): min= 8704, max=18176, per=3.72%, avg=12433.40, stdev=3245.64, samples=10 00:25:43.339 iops : min= 68, max= 142, avg=96.90, stdev=25.29, samples=10 00:25:43.339 write: IOPS=88, BW=11.0MiB/s (11.5MB/s)(59.2MiB/5384msec); 0 zone resets 00:25:43.339 slat (usec): min=13, max=323, avg=41.37, stdev=25.31 00:25:43.339 clat (msec): min=180, max=1031, avg=669.68, stdev=105.43 00:25:43.339 lat (msec): min=180, max=1031, avg=669.72, stdev=105.43 00:25:43.339 clat percentiles (msec): 00:25:43.339 | 1.00th=[ 275], 5.00th=[ 435], 10.00th=[ 575], 20.00th=[ 659], 00:25:43.339 | 
30.00th=[ 667], 40.00th=[ 676], 50.00th=[ 684], 60.00th=[ 684], 00:25:43.339 | 70.00th=[ 693], 80.00th=[ 701], 90.00th=[ 718], 95.00th=[ 818], 00:25:43.339 | 99.00th=[ 995], 99.50th=[ 1011], 99.90th=[ 1036], 99.95th=[ 1036], 00:25:43.339 | 99.99th=[ 1036] 00:25:43.339 bw ( KiB/s): min= 5365, max=11520, per=3.15%, avg=10567.20, stdev=1848.51, samples=10 00:25:43.339 iops : min= 41, max= 90, avg=82.30, stdev=14.69, samples=10 00:25:43.339 lat (msec) : 50=44.28%, 100=3.74%, 250=2.91%, 500=3.22%, 750=42.93% 00:25:43.339 lat (msec) : 1000=2.49%, 2000=0.42% 00:25:43.339 cpu : usr=0.32%, sys=0.67%, ctx=515, majf=0, minf=1 00:25:43.339 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.3%, >=64=93.5% 00:25:43.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.339 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.339 issued rwts: total=488,474,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.339 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.339 job6: (groupid=0, jobs=1): err= 0: pid=85198: Wed Jul 24 05:14:57 2024 00:25:43.339 read: IOPS=86, BW=10.8MiB/s (11.3MB/s)(58.2MiB/5399msec) 00:25:43.339 slat (nsec): min=10359, max=82114, avg=25811.82, stdev=8645.23 00:25:43.339 clat (msec): min=12, max=422, avg=53.98, stdev=43.74 00:25:43.339 lat (msec): min=12, max=422, avg=54.00, stdev=43.74 00:25:43.339 clat percentiles (msec): 00:25:43.339 | 1.00th=[ 13], 5.00th=[ 20], 10.00th=[ 44], 20.00th=[ 45], 00:25:43.339 | 30.00th=[ 45], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 47], 00:25:43.339 | 70.00th=[ 47], 80.00th=[ 48], 90.00th=[ 50], 95.00th=[ 101], 00:25:43.339 | 99.00th=[ 201], 99.50th=[ 414], 99.90th=[ 422], 99.95th=[ 422], 00:25:43.339 | 99.99th=[ 422] 00:25:43.339 bw ( KiB/s): min= 7936, max=17152, per=3.53%, avg=11801.60, stdev=2823.62, samples=10 00:25:43.339 iops : min= 62, max= 134, avg=92.20, stdev=22.06, samples=10 00:25:43.339 write: IOPS=87, BW=11.0MiB/s 
(11.5MB/s)(59.1MiB/5399msec); 0 zone resets 00:25:43.339 slat (usec): min=14, max=2082, avg=35.76, stdev=94.80 00:25:43.339 clat (msec): min=15, max=1098, avg=675.99, stdev=117.51 00:25:43.339 lat (msec): min=15, max=1098, avg=676.02, stdev=117.49 00:25:43.339 clat percentiles (msec): 00:25:43.339 | 1.00th=[ 199], 5.00th=[ 468], 10.00th=[ 600], 20.00th=[ 659], 00:25:43.339 | 30.00th=[ 667], 40.00th=[ 684], 50.00th=[ 693], 60.00th=[ 701], 00:25:43.339 | 70.00th=[ 709], 80.00th=[ 709], 90.00th=[ 726], 95.00th=[ 818], 00:25:43.339 | 99.00th=[ 1028], 99.50th=[ 1070], 99.90th=[ 1099], 99.95th=[ 1099], 00:25:43.339 | 99.99th=[ 1099] 00:25:43.339 bw ( KiB/s): min= 5632, max=11520, per=3.16%, avg=10598.40, stdev=1757.95, samples=10 00:25:43.339 iops : min= 44, max= 90, avg=82.80, stdev=13.73, samples=10 00:25:43.339 lat (msec) : 20=2.77%, 50=42.39%, 100=2.02%, 250=2.88%, 500=3.09% 00:25:43.339 lat (msec) : 750=43.02%, 1000=3.19%, 2000=0.64% 00:25:43.339 cpu : usr=0.20%, sys=0.48%, ctx=496, majf=0, minf=1 00:25:43.339 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.4%, >=64=93.3% 00:25:43.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.339 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.339 issued rwts: total=466,473,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.339 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.339 job7: (groupid=0, jobs=1): err= 0: pid=85202: Wed Jul 24 05:14:57 2024 00:25:43.339 read: IOPS=78, BW=9994KiB/s (10.2MB/s)(52.6MiB/5392msec) 00:25:43.339 slat (usec): min=9, max=872, avg=37.64, stdev=53.33 00:25:43.339 clat (msec): min=30, max=421, avg=56.25, stdev=35.30 00:25:43.339 lat (msec): min=30, max=421, avg=56.29, stdev=35.29 00:25:43.339 clat percentiles (msec): 00:25:43.339 | 1.00th=[ 42], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:25:43.339 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 47], 00:25:43.339 | 70.00th=[ 48], 80.00th=[ 49], 
90.00th=[ 84], 95.00th=[ 120], 00:25:43.339 | 99.00th=[ 169], 99.50th=[ 169], 99.90th=[ 422], 99.95th=[ 422], 00:25:43.339 | 99.99th=[ 422] 00:25:43.339 bw ( KiB/s): min= 7936, max=17186, per=3.21%, avg=10726.90, stdev=2990.81, samples=10 00:25:43.339 iops : min= 62, max= 134, avg=83.70, stdev=23.20, samples=10 00:25:43.339 write: IOPS=87, BW=11.0MiB/s (11.5MB/s)(59.2MiB/5392msec); 0 zone resets 00:25:43.339 slat (usec): min=14, max=235, avg=39.60, stdev=17.92 00:25:43.339 clat (msec): min=185, max=1080, avg=677.07, stdev=107.24 00:25:43.339 lat (msec): min=185, max=1080, avg=677.11, stdev=107.24 00:25:43.339 clat percentiles (msec): 00:25:43.339 | 1.00th=[ 288], 5.00th=[ 443], 10.00th=[ 600], 20.00th=[ 659], 00:25:43.339 | 30.00th=[ 676], 40.00th=[ 684], 50.00th=[ 693], 60.00th=[ 701], 00:25:43.339 | 70.00th=[ 709], 80.00th=[ 709], 90.00th=[ 726], 95.00th=[ 818], 00:25:43.339 | 99.00th=[ 1028], 99.50th=[ 1070], 99.90th=[ 1083], 99.95th=[ 1083], 00:25:43.339 | 99.99th=[ 1083] 00:25:43.339 bw ( KiB/s): min= 5386, max=11520, per=3.15%, avg=10571.70, stdev=1847.17, samples=10 00:25:43.339 iops : min= 42, max= 90, avg=82.50, stdev=14.46, samples=10 00:25:43.339 lat (msec) : 50=39.66%, 100=3.13%, 250=4.47%, 500=3.35%, 750=45.92% 00:25:43.339 lat (msec) : 1000=2.57%, 2000=0.89% 00:25:43.339 cpu : usr=0.28%, sys=0.63%, ctx=514, majf=0, minf=1 00:25:43.339 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=93.0% 00:25:43.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.339 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.339 issued rwts: total=421,474,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.339 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.339 job8: (groupid=0, jobs=1): err= 0: pid=85208: Wed Jul 24 05:14:57 2024 00:25:43.339 read: IOPS=78, BW=9.85MiB/s (10.3MB/s)(53.2MiB/5405msec) 00:25:43.339 slat (usec): min=9, max=364, avg=34.12, stdev=22.04 00:25:43.339 clat 
(usec): min=1956, max=428912, avg=53480.12, stdev=42122.76 00:25:43.339 lat (usec): min=1967, max=428935, avg=53514.25, stdev=42121.48 00:25:43.339 clat percentiles (msec): 00:25:43.339 | 1.00th=[ 4], 5.00th=[ 15], 10.00th=[ 44], 20.00th=[ 45], 00:25:43.339 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 46], 60.00th=[ 47], 00:25:43.339 | 70.00th=[ 47], 80.00th=[ 48], 90.00th=[ 73], 95.00th=[ 108], 00:25:43.339 | 99.00th=[ 174], 99.50th=[ 418], 99.90th=[ 430], 99.95th=[ 430], 00:25:43.339 | 99.99th=[ 430] 00:25:43.339 bw ( KiB/s): min= 6656, max=18176, per=3.23%, avg=10803.20, stdev=2933.28, samples=10 00:25:43.339 iops : min= 52, max= 142, avg=84.40, stdev=22.92, samples=10 00:25:43.339 write: IOPS=87, BW=11.0MiB/s (11.5MB/s)(59.2MiB/5405msec); 0 zone resets 00:25:43.339 slat (usec): min=13, max=399, avg=41.53, stdev=30.90 00:25:43.339 clat (msec): min=15, max=1055, avg=680.65, stdev=118.50 00:25:43.339 lat (msec): min=15, max=1055, avg=680.69, stdev=118.51 00:25:43.339 clat percentiles (msec): 00:25:43.339 | 1.00th=[ 186], 5.00th=[ 451], 10.00th=[ 592], 20.00th=[ 667], 00:25:43.339 | 30.00th=[ 676], 40.00th=[ 684], 50.00th=[ 693], 60.00th=[ 701], 00:25:43.339 | 70.00th=[ 709], 80.00th=[ 726], 90.00th=[ 743], 95.00th=[ 793], 00:25:43.339 | 99.00th=[ 1020], 99.50th=[ 1036], 99.90th=[ 1053], 99.95th=[ 1053], 00:25:43.339 | 99.99th=[ 1053] 00:25:43.339 bw ( KiB/s): min= 5632, max=11520, per=3.16%, avg=10598.40, stdev=1757.95, samples=10 00:25:43.339 iops : min= 44, max= 90, avg=82.80, stdev=13.73, samples=10 00:25:43.339 lat (msec) : 2=0.22%, 4=0.44%, 20=2.22%, 50=39.33%, 100=2.22% 00:25:43.339 lat (msec) : 250=3.33%, 500=3.11%, 750=44.33%, 1000=4.11%, 2000=0.67% 00:25:43.339 cpu : usr=0.30%, sys=0.61%, ctx=526, majf=0, minf=1 00:25:43.339 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=93.0% 00:25:43.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.339 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, 
>=64=0.0% 00:25:43.339 issued rwts: total=426,474,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.339 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.339 job9: (groupid=0, jobs=1): err= 0: pid=85281: Wed Jul 24 05:14:57 2024 00:25:43.339 read: IOPS=91, BW=11.4MiB/s (12.0MB/s)(62.0MiB/5415msec) 00:25:43.339 slat (usec): min=9, max=1284, avg=45.66, stdev=82.23 00:25:43.339 clat (msec): min=5, max=207, avg=53.77, stdev=30.13 00:25:43.339 lat (msec): min=5, max=207, avg=53.82, stdev=30.14 00:25:43.339 clat percentiles (msec): 00:25:43.339 | 1.00th=[ 7], 5.00th=[ 36], 10.00th=[ 43], 20.00th=[ 45], 00:25:43.339 | 30.00th=[ 45], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 47], 00:25:43.339 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 72], 95.00th=[ 107], 00:25:43.339 | 99.00th=[ 197], 99.50th=[ 197], 99.90th=[ 209], 99.95th=[ 209], 00:25:43.339 | 99.99th=[ 209] 00:25:43.339 bw ( KiB/s): min= 9216, max=28614, per=3.79%, avg=12666.20, stdev=5845.32, samples=10 00:25:43.339 iops : min= 72, max= 223, avg=98.90, stdev=45.50, samples=10 00:25:43.339 write: IOPS=87, BW=10.9MiB/s (11.5MB/s)(59.2MiB/5415msec); 0 zone resets 00:25:43.340 slat (usec): min=12, max=11492, avg=82.80, stdev=540.12 00:25:43.340 clat (msec): min=98, max=1101, avg=672.27, stdev=111.99 00:25:43.340 lat (msec): min=109, max=1101, avg=672.35, stdev=111.88 00:25:43.340 clat percentiles (msec): 00:25:43.340 | 1.00th=[ 239], 5.00th=[ 460], 10.00th=[ 567], 20.00th=[ 634], 00:25:43.340 | 30.00th=[ 667], 40.00th=[ 676], 50.00th=[ 684], 60.00th=[ 693], 00:25:43.340 | 70.00th=[ 701], 80.00th=[ 709], 90.00th=[ 726], 95.00th=[ 827], 00:25:43.340 | 99.00th=[ 1045], 99.50th=[ 1083], 99.90th=[ 1099], 99.95th=[ 1099], 00:25:43.340 | 99.99th=[ 1099] 00:25:43.340 bw ( KiB/s): min= 5109, max=11520, per=3.13%, avg=10520.50, stdev=1917.09, samples=10 00:25:43.340 iops : min= 39, max= 90, avg=82.10, stdev=15.26, samples=10 00:25:43.340 lat (msec) : 10=0.93%, 20=0.31%, 50=41.34%, 100=5.67%, 250=3.40% 00:25:43.340 lat 
(msec) : 500=2.58%, 750=42.58%, 1000=2.37%, 2000=0.82% 00:25:43.340 cpu : usr=0.13%, sys=0.54%, ctx=828, majf=0, minf=1 00:25:43.340 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.5% 00:25:43.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.340 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.340 issued rwts: total=496,474,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.340 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.340 job10: (groupid=0, jobs=1): err= 0: pid=85312: Wed Jul 24 05:14:57 2024 00:25:43.340 read: IOPS=94, BW=11.8MiB/s (12.4MB/s)(63.8MiB/5384msec) 00:25:43.340 slat (usec): min=9, max=1263, avg=46.62, stdev=79.98 00:25:43.340 clat (msec): min=27, max=415, avg=56.42, stdev=39.70 00:25:43.340 lat (msec): min=27, max=415, avg=56.47, stdev=39.69 00:25:43.340 clat percentiles (msec): 00:25:43.340 | 1.00th=[ 38], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:25:43.340 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 46], 60.00th=[ 47], 00:25:43.340 | 70.00th=[ 48], 80.00th=[ 49], 90.00th=[ 69], 95.00th=[ 120], 00:25:43.340 | 99.00th=[ 171], 99.50th=[ 405], 99.90th=[ 418], 99.95th=[ 418], 00:25:43.340 | 99.99th=[ 418] 00:25:43.340 bw ( KiB/s): min= 8448, max=19200, per=3.87%, avg=12951.50, stdev=3555.42, samples=10 00:25:43.340 iops : min= 66, max= 150, avg=101.10, stdev=27.84, samples=10 00:25:43.340 write: IOPS=87, BW=11.0MiB/s (11.5MB/s)(59.0MiB/5384msec); 0 zone resets 00:25:43.340 slat (usec): min=15, max=1447, avg=57.82, stdev=106.98 00:25:43.340 clat (msec): min=185, max=1040, avg=667.84, stdev=103.37 00:25:43.340 lat (msec): min=185, max=1040, avg=667.90, stdev=103.38 00:25:43.340 clat percentiles (msec): 00:25:43.340 | 1.00th=[ 284], 5.00th=[ 460], 10.00th=[ 567], 20.00th=[ 651], 00:25:43.340 | 30.00th=[ 667], 40.00th=[ 676], 50.00th=[ 684], 60.00th=[ 693], 00:25:43.340 | 70.00th=[ 701], 80.00th=[ 709], 90.00th=[ 718], 95.00th=[ 776], 00:25:43.340 
| 99.00th=[ 986], 99.50th=[ 1003], 99.90th=[ 1045], 99.95th=[ 1045], 00:25:43.340 | 99.99th=[ 1045] 00:25:43.340 bw ( KiB/s): min= 5376, max=11520, per=3.15%, avg=10570.60, stdev=1846.02, samples=10 00:25:43.340 iops : min= 42, max= 90, avg=82.50, stdev=14.42, samples=10 00:25:43.340 lat (msec) : 50=45.11%, 100=1.93%, 250=4.89%, 500=3.26%, 750=42.26% 00:25:43.340 lat (msec) : 1000=2.24%, 2000=0.31% 00:25:43.340 cpu : usr=0.20%, sys=0.69%, ctx=546, majf=0, minf=1 00:25:43.340 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.6% 00:25:43.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.340 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.340 issued rwts: total=510,472,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.340 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.340 job11: (groupid=0, jobs=1): err= 0: pid=85350: Wed Jul 24 05:14:57 2024 00:25:43.340 read: IOPS=85, BW=10.7MiB/s (11.2MB/s)(57.8MiB/5392msec) 00:25:43.340 slat (usec): min=9, max=401, avg=30.80, stdev=33.47 00:25:43.340 clat (msec): min=28, max=412, avg=57.30, stdev=35.43 00:25:43.340 lat (msec): min=28, max=412, avg=57.33, stdev=35.42 00:25:43.340 clat percentiles (msec): 00:25:43.340 | 1.00th=[ 34], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:25:43.340 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 47], 00:25:43.340 | 70.00th=[ 48], 80.00th=[ 49], 90.00th=[ 105], 95.00th=[ 127], 00:25:43.340 | 99.00th=[ 174], 99.50th=[ 174], 99.90th=[ 414], 99.95th=[ 414], 00:25:43.340 | 99.99th=[ 414] 00:25:43.340 bw ( KiB/s): min= 6400, max=20992, per=3.52%, avg=11773.50, stdev=4095.66, samples=10 00:25:43.340 iops : min= 50, max= 164, avg=91.90, stdev=31.99, samples=10 00:25:43.340 write: IOPS=88, BW=11.0MiB/s (11.5MB/s)(59.4MiB/5392msec); 0 zone resets 00:25:43.340 slat (usec): min=16, max=1297, avg=45.47, stdev=93.69 00:25:43.340 clat (msec): min=182, max=1036, avg=669.72, stdev=105.01 
00:25:43.340 lat (msec): min=183, max=1036, avg=669.77, stdev=105.01 00:25:43.340 clat percentiles (msec): 00:25:43.340 | 1.00th=[ 284], 5.00th=[ 468], 10.00th=[ 550], 20.00th=[ 651], 00:25:43.340 | 30.00th=[ 667], 40.00th=[ 684], 50.00th=[ 684], 60.00th=[ 693], 00:25:43.340 | 70.00th=[ 701], 80.00th=[ 709], 90.00th=[ 726], 95.00th=[ 751], 00:25:43.340 | 99.00th=[ 995], 99.50th=[ 1011], 99.90th=[ 1036], 99.95th=[ 1036], 00:25:43.340 | 99.99th=[ 1036] 00:25:43.340 bw ( KiB/s): min= 5376, max=11520, per=3.15%, avg=10570.60, stdev=1846.02, samples=10 00:25:43.340 iops : min= 42, max= 90, avg=82.50, stdev=14.42, samples=10 00:25:43.340 lat (msec) : 50=41.09%, 100=2.67%, 250=5.76%, 500=3.20%, 750=43.76% 00:25:43.340 lat (msec) : 1000=3.09%, 2000=0.43% 00:25:43.340 cpu : usr=0.17%, sys=0.52%, ctx=554, majf=0, minf=1 00:25:43.340 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.4%, >=64=93.3% 00:25:43.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.340 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.340 issued rwts: total=462,475,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.340 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.340 job12: (groupid=0, jobs=1): err= 0: pid=85354: Wed Jul 24 05:14:57 2024 00:25:43.340 read: IOPS=96, BW=12.0MiB/s (12.6MB/s)(64.9MiB/5396msec) 00:25:43.340 slat (usec): min=10, max=585, avg=30.57, stdev=40.93 00:25:43.340 clat (msec): min=24, max=422, avg=54.88, stdev=30.62 00:25:43.340 lat (msec): min=24, max=422, avg=54.91, stdev=30.62 00:25:43.340 clat percentiles (msec): 00:25:43.340 | 1.00th=[ 35], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:25:43.340 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 47], 00:25:43.340 | 70.00th=[ 48], 80.00th=[ 49], 90.00th=[ 84], 95.00th=[ 120], 00:25:43.340 | 99.00th=[ 184], 99.50th=[ 209], 99.90th=[ 422], 99.95th=[ 422], 00:25:43.340 | 99.99th=[ 422] 00:25:43.340 bw ( KiB/s): min= 8960, max=21760, 
per=3.96%, avg=13258.10, stdev=3658.18, samples=10 00:25:43.340 iops : min= 70, max= 170, avg=103.50, stdev=28.59, samples=10 00:25:43.340 write: IOPS=87, BW=11.0MiB/s (11.5MB/s)(59.2MiB/5396msec); 0 zone resets 00:25:43.340 slat (usec): min=15, max=585, avg=39.23, stdev=52.78 00:25:43.340 clat (msec): min=191, max=1106, avg=667.36, stdev=106.60 00:25:43.340 lat (msec): min=191, max=1106, avg=667.40, stdev=106.60 00:25:43.340 clat percentiles (msec): 00:25:43.340 | 1.00th=[ 292], 5.00th=[ 456], 10.00th=[ 575], 20.00th=[ 651], 00:25:43.340 | 30.00th=[ 659], 40.00th=[ 676], 50.00th=[ 684], 60.00th=[ 684], 00:25:43.340 | 70.00th=[ 693], 80.00th=[ 709], 90.00th=[ 718], 95.00th=[ 793], 00:25:43.340 | 99.00th=[ 1028], 99.50th=[ 1070], 99.90th=[ 1099], 99.95th=[ 1099], 00:25:43.340 | 99.99th=[ 1099] 00:25:43.340 bw ( KiB/s): min= 5120, max=11520, per=3.14%, avg=10545.00, stdev=1918.52, samples=10 00:25:43.340 iops : min= 40, max= 90, avg=82.30, stdev=14.98, samples=10 00:25:43.340 lat (msec) : 50=44.81%, 100=3.63%, 250=4.13%, 500=3.02%, 750=41.39% 00:25:43.340 lat (msec) : 1000=2.11%, 2000=0.91% 00:25:43.340 cpu : usr=0.15%, sys=0.52%, ctx=602, majf=0, minf=1 00:25:43.340 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:25:43.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.340 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.340 issued rwts: total=519,474,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.340 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.340 job13: (groupid=0, jobs=1): err= 0: pid=85355: Wed Jul 24 05:14:57 2024 00:25:43.340 read: IOPS=89, BW=11.2MiB/s (11.8MB/s)(60.4MiB/5382msec) 00:25:43.340 slat (usec): min=9, max=598, avg=33.73, stdev=32.51 00:25:43.340 clat (msec): min=30, max=411, avg=57.12, stdev=39.43 00:25:43.340 lat (msec): min=30, max=411, avg=57.15, stdev=39.42 00:25:43.340 clat percentiles (msec): 00:25:43.340 | 1.00th=[ 43], 
5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:25:43.340 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 47], 00:25:43.340 | 70.00th=[ 48], 80.00th=[ 49], 90.00th=[ 84], 95.00th=[ 124], 00:25:43.340 | 99.00th=[ 188], 99.50th=[ 388], 99.90th=[ 414], 99.95th=[ 414], 00:25:43.340 | 99.99th=[ 414] 00:25:43.340 bw ( KiB/s): min= 7680, max=21290, per=3.67%, avg=12263.90, stdev=3691.89, samples=10 00:25:43.340 iops : min= 60, max= 166, avg=95.70, stdev=28.74, samples=10 00:25:43.340 write: IOPS=87, BW=10.9MiB/s (11.5MB/s)(58.9MiB/5382msec); 0 zone resets 00:25:43.340 slat (usec): min=13, max=254, avg=39.32, stdev=19.80 00:25:43.340 clat (msec): min=184, max=1095, avg=671.56, stdev=105.31 00:25:43.340 lat (msec): min=184, max=1096, avg=671.60, stdev=105.31 00:25:43.340 clat percentiles (msec): 00:25:43.340 | 1.00th=[ 284], 5.00th=[ 460], 10.00th=[ 567], 20.00th=[ 651], 00:25:43.340 | 30.00th=[ 667], 40.00th=[ 676], 50.00th=[ 684], 60.00th=[ 693], 00:25:43.340 | 70.00th=[ 701], 80.00th=[ 709], 90.00th=[ 726], 95.00th=[ 793], 00:25:43.340 | 99.00th=[ 1011], 99.50th=[ 1045], 99.90th=[ 1099], 99.95th=[ 1099], 00:25:43.340 | 99.99th=[ 1099] 00:25:43.340 bw ( KiB/s): min= 5386, max=11520, per=3.14%, avg=10546.10, stdev=1834.30, samples=10 00:25:43.340 iops : min= 42, max= 90, avg=82.30, stdev=14.36, samples=10 00:25:43.340 lat (msec) : 50=42.56%, 100=3.88%, 250=4.19%, 500=3.35%, 750=42.98% 00:25:43.340 lat (msec) : 1000=2.52%, 2000=0.52% 00:25:43.340 cpu : usr=0.24%, sys=0.65%, ctx=534, majf=0, minf=1 00:25:43.340 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.4%, >=64=93.4% 00:25:43.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.340 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.340 issued rwts: total=483,471,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.340 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.340 job14: (groupid=0, jobs=1): err= 0: pid=85356: Wed Jul 24 
05:14:57 2024 00:25:43.340 read: IOPS=84, BW=10.6MiB/s (11.1MB/s)(56.9MiB/5374msec) 00:25:43.340 slat (nsec): min=7282, max=80062, avg=27124.91, stdev=10629.74 00:25:43.341 clat (msec): min=32, max=400, avg=57.89, stdev=44.35 00:25:43.341 lat (msec): min=32, max=400, avg=57.92, stdev=44.35 00:25:43.341 clat percentiles (msec): 00:25:43.341 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:25:43.341 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 47], 00:25:43.341 | 70.00th=[ 48], 80.00th=[ 49], 90.00th=[ 73], 95.00th=[ 163], 00:25:43.341 | 99.00th=[ 380], 99.50th=[ 388], 99.90th=[ 401], 99.95th=[ 401], 00:25:43.341 | 99.99th=[ 401] 00:25:43.341 bw ( KiB/s): min= 9472, max=14592, per=3.44%, avg=11517.60, stdev=1689.13, samples=10 00:25:43.341 iops : min= 74, max= 114, avg=89.90, stdev=13.19, samples=10 00:25:43.341 write: IOPS=87, BW=10.9MiB/s (11.5MB/s)(58.8MiB/5374msec); 0 zone resets 00:25:43.341 slat (nsec): min=13468, max=74095, avg=31210.65, stdev=10378.97 00:25:43.341 clat (msec): min=186, max=1082, avg=674.78, stdev=102.29 00:25:43.341 lat (msec): min=186, max=1082, avg=674.81, stdev=102.29 00:25:43.341 clat percentiles (msec): 00:25:43.341 | 1.00th=[ 288], 5.00th=[ 468], 10.00th=[ 592], 20.00th=[ 659], 00:25:43.341 | 30.00th=[ 667], 40.00th=[ 676], 50.00th=[ 684], 60.00th=[ 693], 00:25:43.341 | 70.00th=[ 701], 80.00th=[ 709], 90.00th=[ 726], 95.00th=[ 793], 00:25:43.341 | 99.00th=[ 1011], 99.50th=[ 1045], 99.90th=[ 1083], 99.95th=[ 1083], 00:25:43.341 | 99.99th=[ 1083] 00:25:43.341 bw ( KiB/s): min= 5120, max=11520, per=3.14%, avg=10545.00, stdev=1922.31, samples=10 00:25:43.341 iops : min= 40, max= 90, avg=82.30, stdev=15.01, samples=10 00:25:43.341 lat (msec) : 50=42.81%, 100=3.03%, 250=3.14%, 500=3.46%, 750=44.32% 00:25:43.341 lat (msec) : 1000=2.70%, 2000=0.54% 00:25:43.341 cpu : usr=0.11%, sys=0.50%, ctx=497, majf=0, minf=1 00:25:43.341 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.5%, >=64=93.2% 00:25:43.341 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.341 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.341 issued rwts: total=455,470,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.341 job15: (groupid=0, jobs=1): err= 0: pid=85357: Wed Jul 24 05:14:57 2024 00:25:43.341 read: IOPS=83, BW=10.4MiB/s (10.9MB/s)(56.2MiB/5390msec) 00:25:43.341 slat (usec): min=9, max=822, avg=34.79, stdev=53.35 00:25:43.341 clat (msec): min=36, max=415, avg=56.03, stdev=31.91 00:25:43.341 lat (msec): min=36, max=415, avg=56.07, stdev=31.91 00:25:43.341 clat percentiles (msec): 00:25:43.341 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:25:43.341 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 47], 00:25:43.341 | 70.00th=[ 48], 80.00th=[ 49], 90.00th=[ 87], 95.00th=[ 133], 00:25:43.341 | 99.00th=[ 169], 99.50th=[ 171], 99.90th=[ 418], 99.95th=[ 418], 00:25:43.341 | 99.99th=[ 418] 00:25:43.341 bw ( KiB/s): min= 7680, max=18212, per=3.43%, avg=11470.20, stdev=3237.58, samples=10 00:25:43.341 iops : min= 60, max= 142, avg=89.50, stdev=25.25, samples=10 00:25:43.341 write: IOPS=87, BW=11.0MiB/s (11.5MB/s)(59.2MiB/5390msec); 0 zone resets 00:25:43.341 slat (usec): min=13, max=695, avg=41.44, stdev=46.67 00:25:43.341 clat (msec): min=186, max=1039, avg=673.54, stdev=106.05 00:25:43.341 lat (msec): min=186, max=1039, avg=673.58, stdev=106.06 00:25:43.341 clat percentiles (msec): 00:25:43.341 | 1.00th=[ 288], 5.00th=[ 443], 10.00th=[ 584], 20.00th=[ 651], 00:25:43.341 | 30.00th=[ 667], 40.00th=[ 684], 50.00th=[ 693], 60.00th=[ 701], 00:25:43.341 | 70.00th=[ 701], 80.00th=[ 718], 90.00th=[ 735], 95.00th=[ 802], 00:25:43.341 | 99.00th=[ 995], 99.50th=[ 1020], 99.90th=[ 1036], 99.95th=[ 1036], 00:25:43.341 | 99.99th=[ 1036] 00:25:43.341 bw ( KiB/s): min= 5386, max=11520, per=3.14%, avg=10546.00, stdev=1837.58, samples=10 00:25:43.341 iops : min= 42, max= 
90, avg=82.30, stdev=14.36, samples=10 00:25:43.341 lat (msec) : 50=40.91%, 100=4.11%, 250=4.00%, 500=3.35%, 750=44.48% 00:25:43.341 lat (msec) : 1000=2.71%, 2000=0.43% 00:25:43.341 cpu : usr=0.20%, sys=0.54%, ctx=621, majf=0, minf=1 00:25:43.341 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.5%, >=64=93.2% 00:25:43.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.341 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.341 issued rwts: total=450,474,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.341 job16: (groupid=0, jobs=1): err= 0: pid=85358: Wed Jul 24 05:14:57 2024 00:25:43.341 read: IOPS=92, BW=11.6MiB/s (12.1MB/s)(62.1MiB/5370msec) 00:25:43.341 slat (usec): min=8, max=333, avg=29.66, stdev=18.62 00:25:43.341 clat (msec): min=34, max=399, avg=58.96, stdev=42.42 00:25:43.341 lat (msec): min=34, max=399, avg=58.99, stdev=42.41 00:25:43.341 clat percentiles (msec): 00:25:43.341 | 1.00th=[ 37], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:25:43.341 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 47], 00:25:43.341 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 101], 95.00th=[ 131], 00:25:43.341 | 99.00th=[ 388], 99.50th=[ 401], 99.90th=[ 401], 99.95th=[ 401], 00:25:43.341 | 99.99th=[ 401] 00:25:43.341 bw ( KiB/s): min= 8960, max=24832, per=3.77%, avg=12591.00, stdev=4714.47, samples=10 00:25:43.341 iops : min= 70, max= 194, avg=98.20, stdev=36.92, samples=10 00:25:43.341 write: IOPS=87, BW=11.0MiB/s (11.5MB/s)(59.0MiB/5370msec); 0 zone resets 00:25:43.341 slat (usec): min=11, max=269, avg=37.60, stdev=18.29 00:25:43.341 clat (msec): min=172, max=1023, avg=665.13, stdev=105.39 00:25:43.341 lat (msec): min=172, max=1023, avg=665.16, stdev=105.40 00:25:43.341 clat percentiles (msec): 00:25:43.341 | 1.00th=[ 271], 5.00th=[ 456], 10.00th=[ 542], 20.00th=[ 651], 00:25:43.341 | 30.00th=[ 667], 40.00th=[ 684], 50.00th=[ 
684], 60.00th=[ 693], 00:25:43.341 | 70.00th=[ 693], 80.00th=[ 701], 90.00th=[ 718], 95.00th=[ 776], 00:25:43.341 | 99.00th=[ 1003], 99.50th=[ 1011], 99.90th=[ 1028], 99.95th=[ 1028], 00:25:43.341 | 99.99th=[ 1028] 00:25:43.341 bw ( KiB/s): min= 5376, max=11520, per=3.16%, avg=10593.80, stdev=1848.60, samples=10 00:25:43.341 iops : min= 42, max= 90, avg=82.60, stdev=14.37, samples=10 00:25:43.341 lat (msec) : 50=42.21%, 100=3.82%, 250=5.16%, 500=3.51%, 750=42.72% 00:25:43.341 lat (msec) : 1000=2.06%, 2000=0.52% 00:25:43.341 cpu : usr=0.24%, sys=0.65%, ctx=534, majf=0, minf=1 00:25:43.341 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.3%, >=64=93.5% 00:25:43.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.341 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.341 issued rwts: total=497,472,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.341 job17: (groupid=0, jobs=1): err= 0: pid=85359: Wed Jul 24 05:14:57 2024 00:25:43.341 read: IOPS=81, BW=10.1MiB/s (10.6MB/s)(54.6MiB/5385msec) 00:25:43.341 slat (usec): min=9, max=122, avg=35.14, stdev=18.16 00:25:43.341 clat (msec): min=38, max=388, avg=56.34, stdev=30.96 00:25:43.341 lat (msec): min=38, max=388, avg=56.38, stdev=30.95 00:25:43.341 clat percentiles (msec): 00:25:43.341 | 1.00th=[ 41], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:25:43.341 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 47], 00:25:43.341 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 101], 95.00th=[ 134], 00:25:43.341 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 388], 99.95th=[ 388], 00:25:43.341 | 99.99th=[ 388] 00:25:43.341 bw ( KiB/s): min= 5376, max=22829, per=3.33%, avg=11138.00, stdev=4822.40, samples=10 00:25:43.341 iops : min= 42, max= 178, avg=86.90, stdev=37.56, samples=10 00:25:43.341 write: IOPS=88, BW=11.0MiB/s (11.6MB/s)(59.4MiB/5385msec); 0 zone resets 00:25:43.341 slat (nsec): 
min=15887, max=98994, avg=42157.71, stdev=16467.70 00:25:43.341 clat (msec): min=176, max=1056, avg=672.74, stdev=110.92 00:25:43.341 lat (msec): min=176, max=1056, avg=672.79, stdev=110.92 00:25:43.341 clat percentiles (msec): 00:25:43.341 | 1.00th=[ 275], 5.00th=[ 430], 10.00th=[ 567], 20.00th=[ 651], 00:25:43.341 | 30.00th=[ 667], 40.00th=[ 676], 50.00th=[ 684], 60.00th=[ 701], 00:25:43.341 | 70.00th=[ 709], 80.00th=[ 718], 90.00th=[ 743], 95.00th=[ 810], 00:25:43.341 | 99.00th=[ 1020], 99.50th=[ 1045], 99.90th=[ 1053], 99.95th=[ 1053], 00:25:43.341 | 99.99th=[ 1053] 00:25:43.341 bw ( KiB/s): min= 5386, max=11520, per=3.15%, avg=10571.60, stdev=1842.89, samples=10 00:25:43.341 iops : min= 42, max= 90, avg=82.50, stdev=14.42, samples=10 00:25:43.341 lat (msec) : 50=39.91%, 100=3.29%, 250=5.04%, 500=3.51%, 750=44.96% 00:25:43.341 lat (msec) : 1000=2.52%, 2000=0.77% 00:25:43.341 cpu : usr=0.26%, sys=0.58%, ctx=509, majf=0, minf=1 00:25:43.341 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.1% 00:25:43.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.341 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.341 issued rwts: total=437,475,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.341 job18: (groupid=0, jobs=1): err= 0: pid=85360: Wed Jul 24 05:14:57 2024 00:25:43.341 read: IOPS=86, BW=10.8MiB/s (11.3MB/s)(58.1MiB/5395msec) 00:25:43.341 slat (usec): min=9, max=560, avg=39.95, stdev=61.30 00:25:43.341 clat (msec): min=22, max=422, avg=53.11, stdev=28.21 00:25:43.341 lat (msec): min=22, max=422, avg=53.15, stdev=28.21 00:25:43.341 clat percentiles (msec): 00:25:43.341 | 1.00th=[ 33], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:25:43.341 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 46], 60.00th=[ 47], 00:25:43.341 | 70.00th=[ 48], 80.00th=[ 49], 90.00th=[ 67], 95.00th=[ 101], 00:25:43.341 | 99.00th=[ 176], 
99.50th=[ 176], 99.90th=[ 422], 99.95th=[ 422], 00:25:43.341 | 99.99th=[ 422] 00:25:43.341 bw ( KiB/s): min= 9472, max=18176, per=3.55%, avg=11876.30, stdev=2415.79, samples=10 00:25:43.341 iops : min= 74, max= 142, avg=92.70, stdev=18.94, samples=10 00:25:43.341 write: IOPS=88, BW=11.0MiB/s (11.5MB/s)(59.4MiB/5395msec); 0 zone resets 00:25:43.341 slat (usec): min=11, max=1073, avg=48.39, stdev=76.15 00:25:43.341 clat (msec): min=181, max=1068, avg=673.91, stdev=105.03 00:25:43.341 lat (msec): min=181, max=1069, avg=673.96, stdev=105.04 00:25:43.341 clat percentiles (msec): 00:25:43.341 | 1.00th=[ 284], 5.00th=[ 456], 10.00th=[ 584], 20.00th=[ 659], 00:25:43.341 | 30.00th=[ 667], 40.00th=[ 676], 50.00th=[ 684], 60.00th=[ 693], 00:25:43.341 | 70.00th=[ 701], 80.00th=[ 709], 90.00th=[ 735], 95.00th=[ 810], 00:25:43.341 | 99.00th=[ 1011], 99.50th=[ 1036], 99.90th=[ 1070], 99.95th=[ 1070], 00:25:43.341 | 99.99th=[ 1070] 00:25:43.341 bw ( KiB/s): min= 5376, max=11520, per=3.15%, avg=10570.70, stdev=1850.29, samples=10 00:25:43.341 iops : min= 42, max= 90, avg=82.50, stdev=14.46, samples=10 00:25:43.342 lat (msec) : 50=42.66%, 100=4.36%, 250=2.77%, 500=3.09%, 750=44.04% 00:25:43.342 lat (msec) : 1000=2.23%, 2000=0.85% 00:25:43.342 cpu : usr=0.09%, sys=0.46%, ctx=873, majf=0, minf=1 00:25:43.342 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.4%, >=64=93.3% 00:25:43.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.342 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.342 issued rwts: total=465,475,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.342 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.342 job19: (groupid=0, jobs=1): err= 0: pid=85361: Wed Jul 24 05:14:57 2024 00:25:43.342 read: IOPS=97, BW=12.2MiB/s (12.8MB/s)(65.5MiB/5381msec) 00:25:43.342 slat (usec): min=9, max=216, avg=31.86, stdev=16.29 00:25:43.342 clat (msec): min=31, max=411, avg=57.98, stdev=41.19 00:25:43.342 
lat (msec): min=31, max=411, avg=58.01, stdev=41.19 00:25:43.342 clat percentiles (msec): 00:25:43.342 | 1.00th=[ 41], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:25:43.342 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 46], 60.00th=[ 47], 00:25:43.342 | 70.00th=[ 48], 80.00th=[ 49], 90.00th=[ 96], 95.00th=[ 142], 00:25:43.342 | 99.00th=[ 199], 99.50th=[ 401], 99.90th=[ 414], 99.95th=[ 414], 00:25:43.342 | 99.99th=[ 414] 00:25:43.342 bw ( KiB/s): min= 8192, max=22316, per=3.98%, avg=13314.20, stdev=3727.04, samples=10 00:25:43.342 iops : min= 64, max= 174, avg=103.90, stdev=29.08, samples=10 00:25:43.342 write: IOPS=87, BW=11.0MiB/s (11.5MB/s)(59.0MiB/5381msec); 0 zone resets 00:25:43.342 slat (usec): min=15, max=100, avg=38.04, stdev=13.59 00:25:43.342 clat (msec): min=181, max=1058, avg=664.15, stdev=101.66 00:25:43.342 lat (msec): min=181, max=1058, avg=664.19, stdev=101.66 00:25:43.342 clat percentiles (msec): 00:25:43.342 | 1.00th=[ 284], 5.00th=[ 464], 10.00th=[ 567], 20.00th=[ 642], 00:25:43.342 | 30.00th=[ 659], 40.00th=[ 676], 50.00th=[ 684], 60.00th=[ 684], 00:25:43.342 | 70.00th=[ 693], 80.00th=[ 701], 90.00th=[ 718], 95.00th=[ 751], 00:25:43.342 | 99.00th=[ 1003], 99.50th=[ 1045], 99.90th=[ 1062], 99.95th=[ 1062], 00:25:43.342 | 99.99th=[ 1062] 00:25:43.342 bw ( KiB/s): min= 5386, max=11520, per=3.15%, avg=10571.60, stdev=1846.50, samples=10 00:25:43.342 iops : min= 42, max= 90, avg=82.50, stdev=14.43, samples=10 00:25:43.342 lat (msec) : 50=45.18%, 100=2.61%, 250=4.82%, 500=2.91%, 750=42.07% 00:25:43.342 lat (msec) : 1000=1.91%, 2000=0.50% 00:25:43.342 cpu : usr=0.28%, sys=0.65%, ctx=518, majf=0, minf=1 00:25:43.342 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:25:43.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.342 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.342 issued rwts: total=524,472,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.342 latency : 
target=0, window=0, percentile=100.00%, depth=64 00:25:43.342 job20: (groupid=0, jobs=1): err= 0: pid=85362: Wed Jul 24 05:14:57 2024 00:25:43.342 read: IOPS=85, BW=10.7MiB/s (11.2MB/s)(57.5MiB/5380msec) 00:25:43.342 slat (nsec): min=9541, max=68375, avg=25426.84, stdev=9579.68 00:25:43.342 clat (msec): min=31, max=404, avg=56.30, stdev=39.64 00:25:43.342 lat (msec): min=31, max=404, avg=56.32, stdev=39.64 00:25:43.342 clat percentiles (msec): 00:25:43.342 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:25:43.342 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 46], 60.00th=[ 47], 00:25:43.342 | 70.00th=[ 47], 80.00th=[ 49], 90.00th=[ 77], 95.00th=[ 126], 00:25:43.342 | 99.00th=[ 165], 99.50th=[ 393], 99.90th=[ 405], 99.95th=[ 405], 00:25:43.342 | 99.99th=[ 405] 00:25:43.342 bw ( KiB/s): min= 8448, max=15872, per=3.48%, avg=11642.60, stdev=2470.49, samples=10 00:25:43.342 iops : min= 66, max= 124, avg=90.80, stdev=19.21, samples=10 00:25:43.342 write: IOPS=87, BW=11.0MiB/s (11.5MB/s)(59.0MiB/5380msec); 0 zone resets 00:25:43.342 slat (usec): min=15, max=662, avg=33.90, stdev=36.78 00:25:43.342 clat (msec): min=188, max=1074, avg=673.58, stdev=106.67 00:25:43.342 lat (msec): min=188, max=1074, avg=673.61, stdev=106.67 00:25:43.342 clat percentiles (msec): 00:25:43.342 | 1.00th=[ 284], 5.00th=[ 447], 10.00th=[ 592], 20.00th=[ 659], 00:25:43.342 | 30.00th=[ 676], 40.00th=[ 676], 50.00th=[ 684], 60.00th=[ 693], 00:25:43.342 | 70.00th=[ 701], 80.00th=[ 709], 90.00th=[ 726], 95.00th=[ 776], 00:25:43.342 | 99.00th=[ 1028], 99.50th=[ 1053], 99.90th=[ 1083], 99.95th=[ 1083], 00:25:43.342 | 99.99th=[ 1083] 00:25:43.342 bw ( KiB/s): min= 5376, max=11520, per=3.15%, avg=10568.30, stdev=1845.07, samples=10 00:25:43.342 iops : min= 42, max= 90, avg=82.40, stdev=14.38, samples=10 00:25:43.342 lat (msec) : 50=42.92%, 100=2.58%, 250=3.86%, 500=3.22%, 750=44.10% 00:25:43.342 lat (msec) : 1000=2.58%, 2000=0.75% 00:25:43.342 cpu : usr=0.19%, sys=0.48%, ctx=505, majf=0, minf=1 
00:25:43.342 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.4%, >=64=93.2% 00:25:43.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.342 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.342 issued rwts: total=460,472,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.342 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.342 job21: (groupid=0, jobs=1): err= 0: pid=85363: Wed Jul 24 05:14:57 2024 00:25:43.342 read: IOPS=79, BW=9.88MiB/s (10.4MB/s)(53.5MiB/5412msec) 00:25:43.342 slat (nsec): min=9425, max=91483, avg=31332.64, stdev=12332.00 00:25:43.342 clat (msec): min=5, max=444, avg=52.66, stdev=35.15 00:25:43.342 lat (msec): min=5, max=444, avg=52.69, stdev=35.15 00:25:43.342 clat percentiles (msec): 00:25:43.342 | 1.00th=[ 10], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 45], 00:25:43.342 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 46], 60.00th=[ 47], 00:25:43.342 | 70.00th=[ 47], 80.00th=[ 48], 90.00th=[ 56], 95.00th=[ 118], 00:25:43.342 | 99.00th=[ 155], 99.50th=[ 199], 99.90th=[ 447], 99.95th=[ 447], 00:25:43.342 | 99.99th=[ 447] 00:25:43.342 bw ( KiB/s): min= 7424, max=15134, per=3.25%, avg=10883.00, stdev=2540.55, samples=10 00:25:43.342 iops : min= 58, max= 118, avg=85.00, stdev=19.80, samples=10 00:25:43.342 write: IOPS=87, BW=11.0MiB/s (11.5MB/s)(59.4MiB/5412msec); 0 zone resets 00:25:43.342 slat (usec): min=12, max=2765, avg=49.23, stdev=148.46 00:25:43.342 clat (msec): min=15, max=1083, avg=680.41, stdev=120.23 00:25:43.342 lat (msec): min=15, max=1083, avg=680.46, stdev=120.22 00:25:43.342 clat percentiles (msec): 00:25:43.342 | 1.00th=[ 211], 5.00th=[ 468], 10.00th=[ 584], 20.00th=[ 659], 00:25:43.342 | 30.00th=[ 676], 40.00th=[ 684], 50.00th=[ 693], 60.00th=[ 701], 00:25:43.342 | 70.00th=[ 709], 80.00th=[ 726], 90.00th=[ 743], 95.00th=[ 835], 00:25:43.342 | 99.00th=[ 1053], 99.50th=[ 1083], 99.90th=[ 1083], 99.95th=[ 1083], 00:25:43.342 | 99.99th=[ 1083] 00:25:43.342 
bw ( KiB/s): min= 5643, max=11520, per=3.16%, avg=10599.50, stdev=1762.78, samples=10 00:25:43.342 iops : min= 44, max= 90, avg=82.80, stdev=13.80, samples=10 00:25:43.342 lat (msec) : 10=1.11%, 20=0.89%, 50=40.86%, 100=1.55%, 250=3.65% 00:25:43.342 lat (msec) : 500=2.66%, 750=44.52%, 1000=3.88%, 2000=0.89% 00:25:43.342 cpu : usr=0.26%, sys=0.61%, ctx=514, majf=0, minf=1 00:25:43.342 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.0% 00:25:43.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.342 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.342 issued rwts: total=428,475,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.342 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.343 job22: (groupid=0, jobs=1): err= 0: pid=85364: Wed Jul 24 05:14:57 2024 00:25:43.343 read: IOPS=102, BW=12.8MiB/s (13.4MB/s)(69.4MiB/5413msec) 00:25:43.343 slat (usec): min=6, max=1178, avg=33.34, stdev=58.58 00:25:43.343 clat (msec): min=2, max=437, avg=49.76, stdev=34.48 00:25:43.343 lat (msec): min=2, max=437, avg=49.79, stdev=34.49 00:25:43.343 clat percentiles (msec): 00:25:43.343 | 1.00th=[ 4], 5.00th=[ 26], 10.00th=[ 44], 20.00th=[ 45], 00:25:43.343 | 30.00th=[ 45], 40.00th=[ 46], 50.00th=[ 46], 60.00th=[ 47], 00:25:43.343 | 70.00th=[ 47], 80.00th=[ 48], 90.00th=[ 49], 95.00th=[ 73], 00:25:43.343 | 99.00th=[ 207], 99.50th=[ 213], 99.90th=[ 439], 99.95th=[ 439], 00:25:43.343 | 99.99th=[ 439] 00:25:43.343 bw ( KiB/s): min=10240, max=19456, per=4.23%, avg=14156.80, stdev=2881.31, samples=10 00:25:43.343 iops : min= 80, max= 152, avg=110.60, stdev=22.51, samples=10 00:25:43.343 write: IOPS=87, BW=11.0MiB/s (11.5MB/s)(59.5MiB/5413msec); 0 zone resets 00:25:43.343 slat (usec): min=7, max=299, avg=36.57, stdev=31.11 00:25:43.343 clat (msec): min=6, max=1064, avg=668.66, stdev=122.62 00:25:43.343 lat (msec): min=6, max=1064, avg=668.70, stdev=122.62 00:25:43.343 clat percentiles (msec): 
00:25:43.343 | 1.00th=[ 47], 5.00th=[ 451], 10.00th=[ 584], 20.00th=[ 659], 00:25:43.343 | 30.00th=[ 667], 40.00th=[ 676], 50.00th=[ 676], 60.00th=[ 684], 00:25:43.343 | 70.00th=[ 693], 80.00th=[ 709], 90.00th=[ 743], 95.00th=[ 827], 00:25:43.343 | 99.00th=[ 1011], 99.50th=[ 1045], 99.90th=[ 1062], 99.95th=[ 1062], 00:25:43.343 | 99.99th=[ 1062] 00:25:43.343 bw ( KiB/s): min= 6144, max=11520, per=3.17%, avg=10624.00, stdev=1593.02, samples=10 00:25:43.343 iops : min= 48, max= 90, avg=83.00, stdev=12.45, samples=10 00:25:43.343 lat (msec) : 4=0.78%, 10=1.26%, 20=0.19%, 50=48.79%, 100=1.55% 00:25:43.343 lat (msec) : 250=2.04%, 500=2.62%, 750=38.60%, 1000=3.69%, 2000=0.48% 00:25:43.343 cpu : usr=0.22%, sys=0.54%, ctx=599, majf=0, minf=1 00:25:43.343 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.9% 00:25:43.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.343 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.343 issued rwts: total=555,476,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.343 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.343 job23: (groupid=0, jobs=1): err= 0: pid=85365: Wed Jul 24 05:14:57 2024 00:25:43.343 read: IOPS=84, BW=10.5MiB/s (11.0MB/s)(56.8MiB/5394msec) 00:25:43.343 slat (nsec): min=9069, max=73174, avg=24168.46, stdev=9554.73 00:25:43.343 clat (msec): min=9, max=416, avg=55.92, stdev=41.58 00:25:43.343 lat (msec): min=9, max=416, avg=55.94, stdev=41.58 00:25:43.343 clat percentiles (msec): 00:25:43.343 | 1.00th=[ 17], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 45], 00:25:43.343 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 47], 00:25:43.343 | 70.00th=[ 48], 80.00th=[ 49], 90.00th=[ 74], 95.00th=[ 108], 00:25:43.343 | 99.00th=[ 205], 99.50th=[ 405], 99.90th=[ 418], 99.95th=[ 418], 00:25:43.343 | 99.99th=[ 418] 00:25:43.343 bw ( KiB/s): min= 7936, max=19672, per=3.44%, avg=11514.30, stdev=3640.06, samples=10 00:25:43.343 iops : 
min= 62, max= 153, avg=89.80, stdev=28.35, samples=10 00:25:43.343 write: IOPS=87, BW=10.9MiB/s (11.5MB/s)(59.0MiB/5394msec); 0 zone resets 00:25:43.343 slat (nsec): min=13018, max=95689, avg=30146.49, stdev=10080.49 00:25:43.343 clat (msec): min=187, max=1041, avg=676.56, stdev=102.60 00:25:43.343 lat (msec): min=187, max=1041, avg=676.59, stdev=102.60 00:25:43.343 clat percentiles (msec): 00:25:43.343 | 1.00th=[ 288], 5.00th=[ 456], 10.00th=[ 584], 20.00th=[ 659], 00:25:43.343 | 30.00th=[ 676], 40.00th=[ 676], 50.00th=[ 684], 60.00th=[ 693], 00:25:43.343 | 70.00th=[ 709], 80.00th=[ 718], 90.00th=[ 743], 95.00th=[ 776], 00:25:43.343 | 99.00th=[ 1028], 99.50th=[ 1045], 99.90th=[ 1045], 99.95th=[ 1045], 00:25:43.343 | 99.99th=[ 1045] 00:25:43.343 bw ( KiB/s): min= 5109, max=11520, per=3.14%, avg=10543.90, stdev=1925.76, samples=10 00:25:43.343 iops : min= 39, max= 90, avg=82.20, stdev=15.32, samples=10 00:25:43.343 lat (msec) : 10=0.32%, 20=0.32%, 50=41.36%, 100=4.00%, 250=3.02% 00:25:43.343 lat (msec) : 500=3.35%, 750=44.60%, 1000=2.38%, 2000=0.65% 00:25:43.343 cpu : usr=0.17%, sys=0.41%, ctx=516, majf=0, minf=1 00:25:43.343 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.5%, >=64=93.2% 00:25:43.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.343 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.343 issued rwts: total=454,472,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.343 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.343 job24: (groupid=0, jobs=1): err= 0: pid=85366: Wed Jul 24 05:14:57 2024 00:25:43.343 read: IOPS=91, BW=11.5MiB/s (12.0MB/s)(61.9MiB/5388msec) 00:25:43.343 slat (usec): min=10, max=605, avg=31.35, stdev=28.84 00:25:43.343 clat (msec): min=33, max=191, avg=53.67, stdev=25.61 00:25:43.343 lat (msec): min=33, max=191, avg=53.70, stdev=25.61 00:25:43.343 clat percentiles (msec): 00:25:43.343 | 1.00th=[ 39], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 
00:25:43.343 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 47], 00:25:43.343 | 70.00th=[ 48], 80.00th=[ 49], 90.00th=[ 68], 95.00th=[ 112], 00:25:43.343 | 99.00th=[ 174], 99.50th=[ 180], 99.90th=[ 192], 99.95th=[ 192], 00:25:43.343 | 99.99th=[ 192] 00:25:43.343 bw ( KiB/s): min=10240, max=18432, per=3.79%, avg=12669.40, stdev=2402.11, samples=10 00:25:43.343 iops : min= 80, max= 144, avg=98.90, stdev=18.76, samples=10 00:25:43.343 write: IOPS=88, BW=11.1MiB/s (11.6MB/s)(59.6MiB/5388msec); 0 zone resets 00:25:43.343 slat (usec): min=14, max=4123, avg=44.33, stdev=187.83 00:25:43.343 clat (msec): min=178, max=1041, avg=665.67, stdev=105.72 00:25:43.343 lat (msec): min=182, max=1041, avg=665.72, stdev=105.68 00:25:43.343 clat percentiles (msec): 00:25:43.343 | 1.00th=[ 279], 5.00th=[ 443], 10.00th=[ 567], 20.00th=[ 642], 00:25:43.343 | 30.00th=[ 667], 40.00th=[ 676], 50.00th=[ 684], 60.00th=[ 684], 00:25:43.343 | 70.00th=[ 693], 80.00th=[ 701], 90.00th=[ 718], 95.00th=[ 760], 00:25:43.343 | 99.00th=[ 1011], 99.50th=[ 1020], 99.90th=[ 1045], 99.95th=[ 1045], 00:25:43.343 | 99.99th=[ 1045] 00:25:43.343 bw ( KiB/s): min= 5376, max=11520, per=3.15%, avg=10570.60, stdev=1846.02, samples=10 00:25:43.343 iops : min= 42, max= 90, avg=82.50, stdev=14.42, samples=10 00:25:43.343 lat (msec) : 50=44.55%, 100=3.09%, 250=3.70%, 500=3.09%, 750=43.11% 00:25:43.343 lat (msec) : 1000=1.75%, 2000=0.72% 00:25:43.343 cpu : usr=0.26%, sys=0.61%, ctx=528, majf=0, minf=1 00:25:43.343 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.5% 00:25:43.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.343 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.343 issued rwts: total=495,477,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.343 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.343 job25: (groupid=0, jobs=1): err= 0: pid=85367: Wed Jul 24 05:14:57 2024 00:25:43.343 read: IOPS=90, 
BW=11.3MiB/s (11.8MB/s)(60.8MiB/5386msec) 00:25:43.343 slat (usec): min=9, max=370, avg=32.27, stdev=26.53 00:25:43.343 clat (msec): min=30, max=414, avg=56.46, stdev=41.32 00:25:43.343 lat (msec): min=30, max=414, avg=56.49, stdev=41.32 00:25:43.343 clat percentiles (msec): 00:25:43.343 | 1.00th=[ 31], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:25:43.343 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 47], 00:25:43.343 | 70.00th=[ 48], 80.00th=[ 49], 90.00th=[ 68], 95.00th=[ 124], 00:25:43.343 | 99.00th=[ 205], 99.50th=[ 393], 99.90th=[ 414], 99.95th=[ 414], 00:25:43.343 | 99.99th=[ 414] 00:25:43.343 bw ( KiB/s): min= 9472, max=18432, per=3.69%, avg=12336.90, stdev=2647.21, samples=10 00:25:43.343 iops : min= 74, max= 144, avg=96.30, stdev=20.72, samples=10 00:25:43.343 write: IOPS=87, BW=11.0MiB/s (11.5MB/s)(59.0MiB/5386msec); 0 zone resets 00:25:43.343 slat (usec): min=12, max=305, avg=40.16, stdev=28.76 00:25:43.343 clat (msec): min=184, max=1050, avg=671.20, stdev=104.15 00:25:43.343 lat (msec): min=184, max=1050, avg=671.24, stdev=104.15 00:25:43.343 clat percentiles (msec): 00:25:43.343 | 1.00th=[ 284], 5.00th=[ 460], 10.00th=[ 584], 20.00th=[ 651], 00:25:43.343 | 30.00th=[ 667], 40.00th=[ 676], 50.00th=[ 684], 60.00th=[ 693], 00:25:43.343 | 70.00th=[ 701], 80.00th=[ 709], 90.00th=[ 718], 95.00th=[ 802], 00:25:43.343 | 99.00th=[ 995], 99.50th=[ 1020], 99.90th=[ 1053], 99.95th=[ 1053], 00:25:43.343 | 99.99th=[ 1053] 00:25:43.343 bw ( KiB/s): min= 5376, max=11520, per=3.15%, avg=10570.60, stdev=1849.62, samples=10 00:25:43.343 iops : min= 42, max= 90, avg=82.50, stdev=14.43, samples=10 00:25:43.343 lat (msec) : 50=44.15%, 100=2.82%, 250=3.76%, 500=3.44%, 750=42.80% 00:25:43.343 lat (msec) : 1000=2.71%, 2000=0.31% 00:25:43.343 cpu : usr=0.19%, sys=0.54%, ctx=596, majf=0, minf=1 00:25:43.343 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.3%, >=64=93.4% 00:25:43.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:25:43.343 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.343 issued rwts: total=486,472,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.343 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.343 job26: (groupid=0, jobs=1): err= 0: pid=85368: Wed Jul 24 05:14:57 2024 00:25:43.343 read: IOPS=80, BW=10.1MiB/s (10.6MB/s)(54.4MiB/5388msec) 00:25:43.343 slat (usec): min=10, max=238, avg=31.19, stdev=28.87 00:25:43.343 clat (msec): min=31, max=415, avg=58.32, stdev=42.14 00:25:43.343 lat (msec): min=31, max=415, avg=58.35, stdev=42.14 00:25:43.343 clat percentiles (msec): 00:25:43.343 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:25:43.343 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 46], 60.00th=[ 47], 00:25:43.343 | 70.00th=[ 48], 80.00th=[ 49], 90.00th=[ 88], 95.00th=[ 153], 00:25:43.343 | 99.00th=[ 213], 99.50th=[ 393], 99.90th=[ 418], 99.95th=[ 418], 00:25:43.343 | 99.99th=[ 418] 00:25:43.343 bw ( KiB/s): min= 7936, max=18725, per=3.31%, avg=11061.10, stdev=2908.09, samples=10 00:25:43.343 iops : min= 62, max= 146, avg=86.30, stdev=22.71, samples=10 00:25:43.343 write: IOPS=87, BW=10.9MiB/s (11.5MB/s)(58.9MiB/5388msec); 0 zone resets 00:25:43.343 slat (usec): min=14, max=10051, avg=58.61, stdev=462.30 00:25:43.343 clat (msec): min=181, max=1051, avg=675.81, stdev=106.92 00:25:43.343 lat (msec): min=184, max=1051, avg=675.87, stdev=106.83 00:25:43.343 clat percentiles (msec): 00:25:43.343 | 1.00th=[ 279], 5.00th=[ 451], 10.00th=[ 575], 20.00th=[ 659], 00:25:43.343 | 30.00th=[ 667], 40.00th=[ 684], 50.00th=[ 684], 60.00th=[ 701], 00:25:43.343 | 70.00th=[ 709], 80.00th=[ 718], 90.00th=[ 735], 95.00th=[ 802], 00:25:43.344 | 99.00th=[ 1028], 99.50th=[ 1045], 99.90th=[ 1053], 99.95th=[ 1053], 00:25:43.344 | 99.99th=[ 1053] 00:25:43.344 bw ( KiB/s): min= 5130, max=11520, per=3.13%, avg=10520.40, stdev=1910.22, samples=10 00:25:43.344 iops : min= 40, max= 90, avg=82.10, stdev=14.94, samples=10 00:25:43.344 lat 
(msec) : 50=40.18%, 100=3.75%, 250=4.19%, 500=3.53%, 750=45.03% 00:25:43.344 lat (msec) : 1000=2.76%, 2000=0.55% 00:25:43.344 cpu : usr=0.11%, sys=0.39%, ctx=869, majf=0, minf=1 00:25:43.344 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.0% 00:25:43.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.344 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.344 issued rwts: total=435,471,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.344 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.344 job27: (groupid=0, jobs=1): err= 0: pid=85369: Wed Jul 24 05:14:57 2024 00:25:43.344 read: IOPS=82, BW=10.4MiB/s (10.9MB/s)(55.9MiB/5393msec) 00:25:43.344 slat (nsec): min=6608, max=93440, avg=25110.70, stdev=10638.97 00:25:43.344 clat (msec): min=7, max=425, avg=57.64, stdev=39.67 00:25:43.344 lat (msec): min=7, max=425, avg=57.66, stdev=39.67 00:25:43.344 clat percentiles (msec): 00:25:43.344 | 1.00th=[ 15], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 46], 00:25:43.344 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 47], 00:25:43.344 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 96], 95.00th=[ 129], 00:25:43.344 | 99.00th=[ 201], 99.50th=[ 405], 99.90th=[ 426], 99.95th=[ 426], 00:25:43.344 | 99.99th=[ 426] 00:25:43.344 bw ( KiB/s): min= 7936, max=22316, per=3.40%, avg=11368.00, stdev=4272.81, samples=10 00:25:43.344 iops : min= 62, max= 174, avg=88.70, stdev=33.24, samples=10 00:25:43.344 write: IOPS=87, BW=10.9MiB/s (11.5MB/s)(59.0MiB/5393msec); 0 zone resets 00:25:43.344 slat (nsec): min=10550, max=77546, avg=31395.12, stdev=10974.17 00:25:43.344 clat (msec): min=185, max=1084, avg=675.68, stdev=104.47 00:25:43.344 lat (msec): min=185, max=1084, avg=675.72, stdev=104.47 00:25:43.344 clat percentiles (msec): 00:25:43.344 | 1.00th=[ 288], 5.00th=[ 460], 10.00th=[ 600], 20.00th=[ 659], 00:25:43.344 | 30.00th=[ 676], 40.00th=[ 684], 50.00th=[ 693], 60.00th=[ 693], 00:25:43.344 | 
70.00th=[ 701], 80.00th=[ 709], 90.00th=[ 718], 95.00th=[ 776], 00:25:43.344 | 99.00th=[ 1028], 99.50th=[ 1036], 99.90th=[ 1083], 99.95th=[ 1083], 00:25:43.344 | 99.99th=[ 1083] 00:25:43.344 bw ( KiB/s): min= 5130, max=11520, per=3.14%, avg=10546.00, stdev=1922.97, samples=10 00:25:43.344 iops : min= 40, max= 90, avg=82.30, stdev=15.04, samples=10 00:25:43.344 lat (msec) : 10=0.33%, 20=0.33%, 50=39.83%, 100=3.59%, 250=4.68% 00:25:43.344 lat (msec) : 500=3.16%, 750=45.27%, 1000=2.07%, 2000=0.76% 00:25:43.344 cpu : usr=0.22%, sys=0.39%, ctx=524, majf=0, minf=1 00:25:43.344 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.5%, >=64=93.1% 00:25:43.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.344 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.344 issued rwts: total=447,472,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.344 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.344 job28: (groupid=0, jobs=1): err= 0: pid=85370: Wed Jul 24 05:14:57 2024 00:25:43.344 read: IOPS=100, BW=12.6MiB/s (13.2MB/s)(67.9MiB/5385msec) 00:25:43.344 slat (usec): min=9, max=393, avg=28.37, stdev=22.95 00:25:43.344 clat (msec): min=9, max=407, avg=56.77, stdev=42.92 00:25:43.344 lat (msec): min=9, max=407, avg=56.80, stdev=42.92 00:25:43.344 clat percentiles (msec): 00:25:43.344 | 1.00th=[ 17], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 45], 00:25:43.344 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 46], 60.00th=[ 47], 00:25:43.344 | 70.00th=[ 48], 80.00th=[ 49], 90.00th=[ 79], 95.00th=[ 124], 00:25:43.344 | 99.00th=[ 222], 99.50th=[ 397], 99.90th=[ 409], 99.95th=[ 409], 00:25:43.344 | 99.99th=[ 409] 00:25:43.344 bw ( KiB/s): min= 9984, max=24271, per=4.11%, avg=13739.80, stdev=3897.07, samples=10 00:25:43.344 iops : min= 78, max= 189, avg=107.20, stdev=30.30, samples=10 00:25:43.344 write: IOPS=87, BW=10.9MiB/s (11.4MB/s)(58.8MiB/5385msec); 0 zone resets 00:25:43.344 slat (usec): min=14, max=1155, 
avg=36.46, stdev=54.42 00:25:43.344 clat (msec): min=187, max=1054, avg=666.56, stdev=104.57 00:25:43.344 lat (msec): min=187, max=1054, avg=666.60, stdev=104.56 00:25:43.344 clat percentiles (msec): 00:25:43.344 | 1.00th=[ 288], 5.00th=[ 451], 10.00th=[ 584], 20.00th=[ 651], 00:25:43.344 | 30.00th=[ 659], 40.00th=[ 667], 50.00th=[ 676], 60.00th=[ 684], 00:25:43.344 | 70.00th=[ 693], 80.00th=[ 709], 90.00th=[ 718], 95.00th=[ 802], 00:25:43.344 | 99.00th=[ 986], 99.50th=[ 1036], 99.90th=[ 1053], 99.95th=[ 1053], 00:25:43.344 | 99.99th=[ 1053] 00:25:43.344 bw ( KiB/s): min= 5109, max=11520, per=3.14%, avg=10543.90, stdev=1929.54, samples=10 00:25:43.344 iops : min= 39, max= 90, avg=82.20, stdev=15.35, samples=10 00:25:43.344 lat (msec) : 10=0.20%, 20=0.39%, 50=45.61%, 100=3.55%, 250=3.75% 00:25:43.344 lat (msec) : 500=3.26%, 750=40.47%, 1000=2.47%, 2000=0.30% 00:25:43.344 cpu : usr=0.20%, sys=0.52%, ctx=678, majf=0, minf=1 00:25:43.344 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.8% 00:25:43.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.344 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.344 issued rwts: total=543,470,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.344 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.344 job29: (groupid=0, jobs=1): err= 0: pid=85371: Wed Jul 24 05:14:57 2024 00:25:43.344 read: IOPS=92, BW=11.6MiB/s (12.2MB/s)(62.6MiB/5394msec) 00:25:43.344 slat (usec): min=9, max=153, avg=26.49, stdev=14.47 00:25:43.344 clat (msec): min=32, max=419, avg=58.72, stdev=37.79 00:25:43.344 lat (msec): min=32, max=419, avg=58.75, stdev=37.79 00:25:43.344 clat percentiles (msec): 00:25:43.344 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 45], 00:25:43.344 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 47], 00:25:43.344 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 101], 95.00th=[ 144], 00:25:43.344 | 99.00th=[ 194], 99.50th=[ 211], 
99.90th=[ 418], 99.95th=[ 418], 00:25:43.344 | 99.99th=[ 418] 00:25:43.344 bw ( KiB/s): min= 7936, max=26164, per=3.82%, avg=12777.30, stdev=5040.57, samples=10 00:25:43.344 iops : min= 62, max= 204, avg=99.70, stdev=39.29, samples=10 00:25:43.344 write: IOPS=87, BW=11.0MiB/s (11.5MB/s)(59.2MiB/5394msec); 0 zone resets 00:25:43.344 slat (usec): min=15, max=102, avg=32.77, stdev=14.56 00:25:43.344 clat (msec): min=186, max=1077, avg=665.17, stdev=109.07 00:25:43.344 lat (msec): min=186, max=1077, avg=665.20, stdev=109.07 00:25:43.344 clat percentiles (msec): 00:25:43.344 | 1.00th=[ 288], 5.00th=[ 456], 10.00th=[ 542], 20.00th=[ 642], 00:25:43.344 | 30.00th=[ 667], 40.00th=[ 676], 50.00th=[ 684], 60.00th=[ 684], 00:25:43.344 | 70.00th=[ 693], 80.00th=[ 701], 90.00th=[ 726], 95.00th=[ 793], 00:25:43.344 | 99.00th=[ 1045], 99.50th=[ 1053], 99.90th=[ 1083], 99.95th=[ 1083], 00:25:43.344 | 99.99th=[ 1083] 00:25:43.344 bw ( KiB/s): min= 5130, max=11520, per=3.14%, avg=10546.00, stdev=1919.17, samples=10 00:25:43.344 iops : min= 40, max= 90, avg=82.30, stdev=15.01, samples=10 00:25:43.344 lat (msec) : 50=42.36%, 100=3.79%, 250=5.44%, 500=2.97%, 750=42.26% 00:25:43.344 lat (msec) : 1000=2.46%, 2000=0.72% 00:25:43.344 cpu : usr=0.07%, sys=0.56%, ctx=670, majf=0, minf=1 00:25:43.344 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.5% 00:25:43.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.344 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.344 issued rwts: total=501,474,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.344 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.344 00:25:43.344 Run status group 0 (all jobs): 00:25:43.344 READ: bw=327MiB/s (342MB/s), 9963KiB/s-12.8MiB/s (10.2MB/s-13.4MB/s), io=1768MiB (1854MB), run=5370-5415msec 00:25:43.344 WRITE: bw=328MiB/s (344MB/s), 10.9MiB/s-11.1MiB/s (11.4MB/s-11.6MB/s), io=1775MiB (1861MB), run=5370-5415msec 00:25:43.344 
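The Run status lines above report every bandwidth figure in both binary (MiB/s) and decimal (MB/s) units. The conversion fio applies between the two can be checked with a small sketch (the function name is illustrative, not part of fio):

```python
def mib_to_mb(mib_per_s: float) -> float:
    # fio prints both binary and decimal rates:
    # 1 MiB = 1,048,576 bytes, 1 MB = 1,000,000 bytes.
    return round(mib_per_s * 1_048_576 / 1_000_000, 1)

# Matches "BW=11.3MiB/s (11.8MB/s)" from the per-job output above.
print(mib_to_mb(11.3))  # 11.8
```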
00:25:43.344 Disk stats (read/write): 00:25:43.344 sda: ios=485/465, merge=0/0, ticks=22554/306800, in_queue=329355, util=91.02% 00:25:43.344 sdb: ios=467/465, merge=0/0, ticks=22738/306580, in_queue=329318, util=91.44% 00:25:43.344 sdc: ios=530/464, merge=0/0, ticks=25848/302874, in_queue=328723, util=91.74% 00:25:43.344 sdd: ios=493/465, merge=0/0, ticks=24355/304142, in_queue=328497, util=91.75% 00:25:43.344 sde: ios=507/465, merge=0/0, ticks=24961/303920, in_queue=328881, util=91.45% 00:25:43.344 sdf: ios=532/465, merge=0/0, ticks=26009/302879, in_queue=328888, util=92.28% 00:25:43.344 sdg: ios=498/469, merge=0/0, ticks=23708/307013, in_queue=330721, util=92.76% 00:25:43.344 sdh: ios=434/465, merge=0/0, ticks=22958/306194, in_queue=329153, util=92.07% 00:25:43.344 sdi: ios=431/469, merge=0/0, ticks=21193/309638, in_queue=330832, util=92.87% 00:25:43.344 sdj: ios=496/466, merge=0/0, ticks=26366/302911, in_queue=329278, util=91.99% 00:25:43.344 sdk: ios=510/465, merge=0/0, ticks=27241/301598, in_queue=328839, util=92.87% 00:25:43.344 sdl: ios=462/465, merge=0/0, ticks=25635/302941, in_queue=328577, util=93.59% 00:25:43.344 sdm: ios=519/465, merge=0/0, ticks=27964/300936, in_queue=328901, util=93.46% 00:25:43.344 sdn: ios=483/465, merge=0/0, ticks=26125/303040, in_queue=329166, util=93.99% 00:25:43.344 sdo: ios=455/464, merge=0/0, ticks=24587/304192, in_queue=328780, util=93.88% 00:25:43.344 sdp: ios=450/464, merge=0/0, ticks=24752/303916, in_queue=328669, util=94.45% 00:25:43.344 sdq: ios=497/464, merge=0/0, ticks=27465/300517, in_queue=327983, util=94.23% 00:25:43.344 sdr: ios=437/465, merge=0/0, ticks=24225/304516, in_queue=328742, util=94.93% 00:25:43.344 sds: ios=465/465, merge=0/0, ticks=24065/304392, in_queue=328457, util=94.66% 00:25:43.344 sdt: ios=524/464, merge=0/0, ticks=28900/299408, in_queue=328309, util=95.41% 00:25:43.344 sdu: ios=460/465, merge=0/0, ticks=24383/304279, in_queue=328663, util=95.24% 00:25:43.344 sdv: ios=428/469, merge=0/0, 
ticks=21708/308910, in_queue=330618, util=96.43% 00:25:43.344 sdw: ios=555/471, merge=0/0, ticks=26761/304495, in_queue=331257, util=96.66% 00:25:43.344 sdx: ios=454/465, merge=0/0, ticks=23905/305727, in_queue=329633, util=96.44% 00:25:43.344 sdy: ios=495/465, merge=0/0, ticks=26510/301713, in_queue=328224, util=95.93% 00:25:43.344 sdz: ios=486/465, merge=0/0, ticks=25918/303118, in_queue=329037, util=96.27% 00:25:43.344 sdaa: ios=435/464, merge=0/0, ticks=24117/304269, in_queue=328387, util=96.33% 00:25:43.344 sdab: ios=447/465, merge=0/0, ticks=24635/304920, in_queue=329555, util=96.87% 00:25:43.344 sdac: ios=543/465, merge=0/0, ticks=28963/300754, in_queue=329717, util=96.97% 00:25:43.344 sdad: ios=501/465, merge=0/0, ticks=28594/300520, in_queue=329114, util=97.32% 00:25:43.344 [2024-07-24 05:14:57.353357] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.345 [2024-07-24 05:14:57.356628] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.345 [2024-07-24 05:14:57.359837] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.345 [2024-07-24 05:14:57.363350] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.345 [2024-07-24 05:14:57.367034] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.345 [2024-07-24 05:14:57.370400] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.345 [2024-07-24 05:14:57.373136] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.345 [2024-07-24 05:14:57.376116] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.345 [2024-07-24 05:14:57.379700] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.345 [2024-07-24 05:14:57.385357] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 
0xb9 00:25:43.345 [2024-07-24 05:14:57.388231] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.345 05:14:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 262144 -d 16 -t randwrite -r 10 00:25:43.345 [2024-07-24 05:14:57.390850] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.345 [2024-07-24 05:14:57.394429] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.345 [global] 00:25:43.345 thread=1 00:25:43.345 invalidate=1 00:25:43.345 rw=randwrite 00:25:43.345 time_based=1 00:25:43.345 runtime=10 00:25:43.345 ioengine=libaio 00:25:43.345 direct=1 00:25:43.345 bs=262144 00:25:43.345 iodepth=16 00:25:43.345 norandommap=1 00:25:43.345 numjobs=1 00:25:43.345 00:25:43.345 [job0] 00:25:43.345 filename=/dev/sda 00:25:43.345 [job1] 00:25:43.345 filename=/dev/sdb 00:25:43.345 [job2] 00:25:43.345 filename=/dev/sdc 00:25:43.345 [job3] 00:25:43.345 filename=/dev/sdd 00:25:43.345 [job4] 00:25:43.345 filename=/dev/sde 00:25:43.345 [job5] 00:25:43.345 filename=/dev/sdf 00:25:43.345 [job6] 00:25:43.345 filename=/dev/sdg 00:25:43.345 [job7] 00:25:43.345 filename=/dev/sdh 00:25:43.345 [job8] 00:25:43.345 filename=/dev/sdi 00:25:43.345 [job9] 00:25:43.345 filename=/dev/sdj 00:25:43.345 [job10] 00:25:43.345 filename=/dev/sdk 00:25:43.345 [job11] 00:25:43.345 filename=/dev/sdl 00:25:43.345 [job12] 00:25:43.345 filename=/dev/sdm 00:25:43.345 [job13] 00:25:43.345 filename=/dev/sdn 00:25:43.345 [job14] 00:25:43.345 filename=/dev/sdo 00:25:43.345 [job15] 00:25:43.345 filename=/dev/sdp 00:25:43.345 [job16] 00:25:43.345 filename=/dev/sdq 00:25:43.345 [job17] 00:25:43.345 filename=/dev/sdr 00:25:43.345 [job18] 00:25:43.345 filename=/dev/sds 00:25:43.345 [job19] 00:25:43.345 filename=/dev/sdt 00:25:43.345 [job20] 00:25:43.345 filename=/dev/sdu 00:25:43.345 [job21] 00:25:43.345 
filename=/dev/sdv 00:25:43.345 [job22] 00:25:43.345 filename=/dev/sdw 00:25:43.345 [job23] 00:25:43.345 filename=/dev/sdx 00:25:43.345 [job24] 00:25:43.345 filename=/dev/sdy 00:25:43.345 [job25] 00:25:43.345 filename=/dev/sdz 00:25:43.345 [job26] 00:25:43.345 filename=/dev/sdaa 00:25:43.345 [job27] 00:25:43.345 filename=/dev/sdab 00:25:43.345 [job28] 00:25:43.345 filename=/dev/sdac 00:25:43.345 [job29] 00:25:43.345 filename=/dev/sdad 00:25:43.604 queue_depth set to 113 (sda) 00:25:43.604 queue_depth set to 113 (sdb) 00:25:43.604 queue_depth set to 113 (sdc) 00:25:43.604 queue_depth set to 113 (sdd) 00:25:43.604 queue_depth set to 113 (sde) 00:25:43.604 queue_depth set to 113 (sdf) 00:25:43.604 queue_depth set to 113 (sdg) 00:25:43.604 queue_depth set to 113 (sdh) 00:25:43.604 queue_depth set to 113 (sdi) 00:25:43.604 queue_depth set to 113 (sdj) 00:25:43.604 queue_depth set to 113 (sdk) 00:25:43.604 queue_depth set to 113 (sdl) 00:25:43.604 queue_depth set to 113 (sdm) 00:25:43.604 queue_depth set to 113 (sdn) 00:25:43.604 queue_depth set to 113 (sdo) 00:25:43.604 queue_depth set to 113 (sdp) 00:25:43.604 queue_depth set to 113 (sdq) 00:25:43.604 queue_depth set to 113 (sdr) 00:25:43.604 queue_depth set to 113 (sds) 00:25:43.604 queue_depth set to 113 (sdt) 00:25:43.604 queue_depth set to 113 (sdu) 00:25:43.604 queue_depth set to 113 (sdv) 00:25:43.604 queue_depth set to 113 (sdw) 00:25:43.604 queue_depth set to 113 (sdx) 00:25:43.604 queue_depth set to 113 (sdy) 00:25:43.604 queue_depth set to 113 (sdz) 00:25:43.604 queue_depth set to 113 (sdaa) 00:25:43.604 queue_depth set to 113 (sdab) 00:25:43.604 queue_depth set to 113 (sdac) 00:25:43.604 queue_depth set to 113 (sdad) 00:25:43.863 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 job2: (g=0): rw=randwrite, 
bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 job11: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 job12: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 job13: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 job14: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 job15: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 job16: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 job17: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=16 00:25:43.863 job18: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 job19: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 job20: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 job21: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 job22: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 job23: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 job24: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 job25: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 job26: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 job27: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 job28: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 job29: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:25:43.863 fio-3.35 00:25:43.863 Starting 30 threads 00:25:43.863 [2024-07-24 05:14:58.340012] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.863 [2024-07-24 05:14:58.347939] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.863 [2024-07-24 05:14:58.352099] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 
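The `[global]` and `[job0]`..`[job29]` parameters interleaved with timestamps above are the job file that `scripts/fio-wrapper` generated for this run. Reassembled (abridged to the first two of the 30 per-device jobs, which map sda through sdad), it has roughly this shape:

```ini
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=16
norandommap=1
numjobs=1

[job0]
filename=/dev/sda

[job1]
filename=/dev/sdb
; ... jobs 2-29 continue identically through /dev/sdad
```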
00:25:43.863 [2024-07-24 05:14:58.354940] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.863 [2024-07-24 05:14:58.357583] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.863 [2024-07-24 05:14:58.360323] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.863 [2024-07-24 05:14:58.363093] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.863 [2024-07-24 05:14:58.365785] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.863 [2024-07-24 05:14:58.368434] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.863 [2024-07-24 05:14:58.371071] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.863 [2024-07-24 05:14:58.373649] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.863 [2024-07-24 05:14:58.376346] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.863 [2024-07-24 05:14:58.378961] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.863 [2024-07-24 05:14:58.381509] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.863 [2024-07-24 05:14:58.384130] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.863 [2024-07-24 05:14:58.386787] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.863 [2024-07-24 05:14:58.389519] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.863 [2024-07-24 05:14:58.392175] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.863 [2024-07-24 05:14:58.394928] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.864 [2024-07-24 05:14:58.397753] 
scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.864 [2024-07-24 05:14:58.400314] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.864 [2024-07-24 05:14:58.403066] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.864 [2024-07-24 05:14:58.405733] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.864 [2024-07-24 05:14:58.408411] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.864 [2024-07-24 05:14:58.411208] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.864 [2024-07-24 05:14:58.413858] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.864 [2024-07-24 05:14:58.416608] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.864 [2024-07-24 05:14:58.419372] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.864 [2024-07-24 05:14:58.422139] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:43.864 [2024-07-24 05:14:58.424783] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.069 [2024-07-24 05:15:09.229411] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.069 [2024-07-24 05:15:09.241210] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.069 [2024-07-24 05:15:09.246276] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.069 [2024-07-24 05:15:09.250280] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.069 [2024-07-24 05:15:09.253899] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.069 [2024-07-24 05:15:09.258308] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: 
unsupported INQUIRY VPD page 0xb9 00:25:56.070 [2024-07-24 05:15:09.261174] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.070 [2024-07-24 05:15:09.264755] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.070 [2024-07-24 05:15:09.267895] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.070 [2024-07-24 05:15:09.271137] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.070 [2024-07-24 05:15:09.274366] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.070 [2024-07-24 05:15:09.277754] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.070 [2024-07-24 05:15:09.281061] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.070 00:25:56.070 job0: (groupid=0, jobs=1): err= 0: pid=85869: Wed Jul 24 05:15:09 2024 00:25:56.070 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(192MiB/10201msec); 0 zone resets 00:25:56.070 slat (usec): min=16, max=265, avg=59.98, stdev=21.32 00:25:56.070 clat (msec): min=9, max=390, avg=212.20, stdev=24.36 00:25:56.070 lat (msec): min=9, max=390, avg=212.26, stdev=24.36 00:25:56.070 clat percentiles (msec): 00:25:56.070 | 1.00th=[ 83], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.070 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.070 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.070 | 99.00th=[ 300], 99.50th=[ 351], 99.90th=[ 393], 99.95th=[ 393], 00:25:56.070 | 99.99th=[ 393] 00:25:56.070 bw ( KiB/s): min=18432, max=20008, per=3.34%, avg=19276.90, stdev=452.93, samples=20 00:25:56.070 iops : min= 72, max= 78, avg=75.25, stdev= 1.80, samples=20 00:25:56.070 lat (msec) : 10=0.13%, 20=0.13%, 50=0.39%, 100=0.52%, 250=97.40% 00:25:56.070 lat (msec) : 500=1.43% 00:25:56.070 cpu : usr=0.19%, sys=0.33%, ctx=782, majf=0, minf=1 
00:25:56.070 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 32=0.0%, >=64=0.0% 00:25:56.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.070 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.070 issued rwts: total=0,768,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.070 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.070 job1: (groupid=0, jobs=1): err= 0: pid=85870: Wed Jul 24 05:15:09 2024 00:25:56.070 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(191MiB/10190msec); 0 zone resets 00:25:56.070 slat (usec): min=19, max=3040, avg=69.18, stdev=108.96 00:25:56.070 clat (msec): min=21, max=384, avg=212.73, stdev=21.32 00:25:56.070 lat (msec): min=24, max=385, avg=212.80, stdev=21.29 00:25:56.070 clat percentiles (msec): 00:25:56.070 | 1.00th=[ 114], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.070 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.070 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.070 | 99.00th=[ 292], 99.50th=[ 347], 99.90th=[ 384], 99.95th=[ 384], 00:25:56.070 | 99.99th=[ 384] 00:25:56.070 bw ( KiB/s): min=18395, max=20008, per=3.33%, avg=19198.25, stdev=432.25, samples=20 00:25:56.070 iops : min= 71, max= 78, avg=74.90, stdev= 1.80, samples=20 00:25:56.070 lat (msec) : 50=0.39%, 100=0.39%, 250=97.78%, 500=1.44% 00:25:56.070 cpu : usr=0.30%, sys=0.32%, ctx=776, majf=0, minf=1 00:25:56.070 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 32=0.0%, >=64=0.0% 00:25:56.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.070 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.070 issued rwts: total=0,765,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.070 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.070 job2: (groupid=0, jobs=1): err= 0: pid=85871: Wed Jul 24 05:15:09 2024 00:25:56.070 write: IOPS=75, BW=18.9MiB/s 
(19.8MB/s)(193MiB/10204msec); 0 zone resets 00:25:56.070 slat (usec): min=23, max=330, avg=61.53, stdev=30.14 00:25:56.070 clat (msec): min=2, max=390, avg=211.43, stdev=27.34 00:25:56.070 lat (msec): min=2, max=390, avg=211.49, stdev=27.34 00:25:56.070 clat percentiles (msec): 00:25:56.070 | 1.00th=[ 47], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.070 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.070 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.070 | 99.00th=[ 296], 99.50th=[ 351], 99.90th=[ 393], 99.95th=[ 393], 00:25:56.070 | 99.99th=[ 393] 00:25:56.070 bw ( KiB/s): min=18944, max=21461, per=3.35%, avg=19323.90, stdev=563.44, samples=20 00:25:56.070 iops : min= 74, max= 83, avg=75.40, stdev= 2.04, samples=20 00:25:56.070 lat (msec) : 4=0.13%, 10=0.39%, 20=0.13%, 50=0.39%, 100=0.52% 00:25:56.070 lat (msec) : 250=97.02%, 500=1.43% 00:25:56.070 cpu : usr=0.20%, sys=0.32%, ctx=820, majf=0, minf=1 00:25:56.070 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.1%, 32=0.0%, >=64=0.0% 00:25:56.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.070 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.070 issued rwts: total=0,771,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.070 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.070 job3: (groupid=0, jobs=1): err= 0: pid=85873: Wed Jul 24 05:15:09 2024 00:25:56.070 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(191MiB/10182msec); 0 zone resets 00:25:56.070 slat (usec): min=17, max=1185, avg=62.84, stdev=46.43 00:25:56.070 clat (msec): min=23, max=377, avg=212.65, stdev=20.59 00:25:56.070 lat (msec): min=23, max=377, avg=212.71, stdev=20.59 00:25:56.070 clat percentiles (msec): 00:25:56.070 | 1.00th=[ 116], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.070 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.070 | 70.00th=[ 215], 80.00th=[ 215], 
90.00th=[ 218], 95.00th=[ 220], 00:25:56.070 | 99.00th=[ 284], 99.50th=[ 338], 99.90th=[ 380], 99.95th=[ 380], 00:25:56.070 | 99.99th=[ 380] 00:25:56.070 bw ( KiB/s): min=18468, max=19968, per=3.33%, avg=19196.00, stdev=347.09, samples=20 00:25:56.070 iops : min= 72, max= 78, avg=74.85, stdev= 1.39, samples=20 00:25:56.070 lat (msec) : 50=0.39%, 100=0.39%, 250=97.91%, 500=1.31% 00:25:56.070 cpu : usr=0.24%, sys=0.40%, ctx=776, majf=0, minf=1 00:25:56.070 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 32=0.0%, >=64=0.0% 00:25:56.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.070 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.070 issued rwts: total=0,765,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.070 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.070 job4: (groupid=0, jobs=1): err= 0: pid=85904: Wed Jul 24 05:15:09 2024 00:25:56.070 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(191MiB/10189msec); 0 zone resets 00:25:56.070 slat (usec): min=24, max=217, avg=52.12, stdev=20.04 00:25:56.070 clat (msec): min=20, max=388, avg=212.79, stdev=21.66 00:25:56.070 lat (msec): min=20, max=388, avg=212.84, stdev=21.66 00:25:56.070 clat percentiles (msec): 00:25:56.070 | 1.00th=[ 113], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.070 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.070 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.070 | 99.00th=[ 296], 99.50th=[ 351], 99.90th=[ 388], 99.95th=[ 388], 00:25:56.070 | 99.99th=[ 388] 00:25:56.070 bw ( KiB/s): min=18944, max=19456, per=3.33%, avg=19194.15, stdev=256.98, samples=20 00:25:56.070 iops : min= 74, max= 76, avg=74.85, stdev= 0.93, samples=20 00:25:56.070 lat (msec) : 50=0.39%, 100=0.52%, 250=97.65%, 500=1.44% 00:25:56.070 cpu : usr=0.14%, sys=0.35%, ctx=812, majf=0, minf=1 00:25:56.070 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 32=0.0%, >=64=0.0% 
00:25:56.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.070 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.070 issued rwts: total=0,765,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.070 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.070 job5: (groupid=0, jobs=1): err= 0: pid=85905: Wed Jul 24 05:15:09 2024 00:25:56.070 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(192MiB/10196msec); 0 zone resets 00:25:56.070 slat (usec): min=12, max=172, avg=66.00, stdev=15.47 00:25:56.070 clat (msec): min=9, max=391, avg=212.35, stdev=23.86 00:25:56.070 lat (msec): min=9, max=391, avg=212.41, stdev=23.87 00:25:56.070 clat percentiles (msec): 00:25:56.070 | 1.00th=[ 89], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.070 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.070 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.070 | 99.00th=[ 300], 99.50th=[ 351], 99.90th=[ 393], 99.95th=[ 393], 00:25:56.070 | 99.99th=[ 393] 00:25:56.070 bw ( KiB/s): min=18906, max=19968, per=3.34%, avg=19247.30, stdev=310.94, samples=20 00:25:56.070 iops : min= 73, max= 78, avg=75.00, stdev= 1.30, samples=20 00:25:56.070 lat (msec) : 10=0.13%, 20=0.13%, 50=0.26%, 100=0.52%, 250=97.52% 00:25:56.070 lat (msec) : 500=1.43% 00:25:56.070 cpu : usr=0.22%, sys=0.50%, ctx=777, majf=0, minf=1 00:25:56.070 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 32=0.0%, >=64=0.0% 00:25:56.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.070 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.070 issued rwts: total=0,767,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.070 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.070 job6: (groupid=0, jobs=1): err= 0: pid=85906: Wed Jul 24 05:15:09 2024 00:25:56.070 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(192MiB/10193msec); 0 zone resets 00:25:56.070 
slat (usec): min=18, max=368, avg=64.28, stdev=33.04 00:25:56.070 clat (msec): min=16, max=386, avg=212.58, stdev=22.18 00:25:56.070 lat (msec): min=16, max=386, avg=212.65, stdev=22.19 00:25:56.070 clat percentiles (msec): 00:25:56.070 | 1.00th=[ 105], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.070 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.070 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.070 | 99.00th=[ 292], 99.50th=[ 347], 99.90th=[ 388], 99.95th=[ 388], 00:25:56.070 | 99.99th=[ 388] 00:25:56.070 bw ( KiB/s): min=18906, max=19968, per=3.33%, avg=19219.80, stdev=308.75, samples=20 00:25:56.070 iops : min= 73, max= 78, avg=74.95, stdev= 1.23, samples=20 00:25:56.071 lat (msec) : 20=0.13%, 50=0.26%, 100=0.52%, 250=97.65%, 500=1.44% 00:25:56.071 cpu : usr=0.18%, sys=0.34%, ctx=804, majf=0, minf=1 00:25:56.071 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 32=0.0%, >=64=0.0% 00:25:56.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.071 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.071 issued rwts: total=0,766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.071 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.071 job7: (groupid=0, jobs=1): err= 0: pid=85913: Wed Jul 24 05:15:09 2024 00:25:56.071 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(191MiB/10189msec); 0 zone resets 00:25:56.071 slat (usec): min=23, max=342, avg=71.67, stdev=34.20 00:25:56.071 clat (msec): min=20, max=388, avg=212.77, stdev=21.68 00:25:56.071 lat (msec): min=20, max=388, avg=212.84, stdev=21.68 00:25:56.071 clat percentiles (msec): 00:25:56.071 | 1.00th=[ 113], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.071 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.071 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.071 | 99.00th=[ 296], 99.50th=[ 351], 99.90th=[ 388], 99.95th=[ 
388], 00:25:56.071 | 99.99th=[ 388] 00:25:56.071 bw ( KiB/s): min=18944, max=19456, per=3.33%, avg=19194.15, stdev=256.98, samples=20 00:25:56.071 iops : min= 74, max= 76, avg=74.85, stdev= 0.93, samples=20 00:25:56.071 lat (msec) : 50=0.39%, 100=0.52%, 250=97.65%, 500=1.44% 00:25:56.071 cpu : usr=0.20%, sys=0.40%, ctx=806, majf=0, minf=1 00:25:56.071 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 32=0.0%, >=64=0.0% 00:25:56.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.071 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.071 issued rwts: total=0,765,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.071 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.071 job8: (groupid=0, jobs=1): err= 0: pid=85924: Wed Jul 24 05:15:09 2024 00:25:56.071 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(191MiB/10177msec); 0 zone resets 00:25:56.071 slat (usec): min=18, max=179, avg=62.11, stdev=14.79 00:25:56.071 clat (msec): min=23, max=372, avg=212.55, stdev=20.31 00:25:56.071 lat (msec): min=23, max=372, avg=212.61, stdev=20.32 00:25:56.071 clat percentiles (msec): 00:25:56.071 | 1.00th=[ 116], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.071 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.071 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.071 | 99.00th=[ 279], 99.50th=[ 334], 99.90th=[ 372], 99.95th=[ 372], 00:25:56.071 | 99.99th=[ 372] 00:25:56.071 bw ( KiB/s): min=18432, max=19456, per=3.33%, avg=19192.25, stdev=307.74, samples=20 00:25:56.071 iops : min= 72, max= 76, avg=74.80, stdev= 1.20, samples=20 00:25:56.071 lat (msec) : 50=0.39%, 100=0.39%, 250=97.91%, 500=1.31% 00:25:56.071 cpu : usr=0.23%, sys=0.36%, ctx=775, majf=0, minf=1 00:25:56.071 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 32=0.0%, >=64=0.0% 00:25:56.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.071 complete : 
0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.071 issued rwts: total=0,765,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.071 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.071 job9: (groupid=0, jobs=1): err= 0: pid=85981: Wed Jul 24 05:15:09 2024 00:25:56.071 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(191MiB/10184msec); 0 zone resets 00:25:56.071 slat (usec): min=21, max=223, avg=54.55, stdev=17.63 00:25:56.071 clat (msec): min=23, max=379, avg=212.68, stdev=20.73 00:25:56.071 lat (msec): min=23, max=379, avg=212.73, stdev=20.73 00:25:56.071 clat percentiles (msec): 00:25:56.071 | 1.00th=[ 116], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.071 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.071 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.071 | 99.00th=[ 288], 99.50th=[ 338], 99.90th=[ 380], 99.95th=[ 380], 00:25:56.071 | 99.99th=[ 380] 00:25:56.071 bw ( KiB/s): min=18468, max=19968, per=3.33%, avg=19201.60, stdev=412.95, samples=20 00:25:56.071 iops : min= 72, max= 78, avg=74.95, stdev= 1.57, samples=20 00:25:56.071 lat (msec) : 50=0.39%, 100=0.39%, 250=97.91%, 500=1.31% 00:25:56.071 cpu : usr=0.21%, sys=0.28%, ctx=780, majf=0, minf=1 00:25:56.071 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 32=0.0%, >=64=0.0% 00:25:56.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.071 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.071 issued rwts: total=0,765,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.071 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.071 job10: (groupid=0, jobs=1): err= 0: pid=86023: Wed Jul 24 05:15:09 2024 00:25:56.071 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(192MiB/10193msec); 0 zone resets 00:25:56.071 slat (usec): min=19, max=221, avg=56.94, stdev=15.31 00:25:56.071 clat (msec): min=13, max=389, avg=212.59, stdev=22.61 00:25:56.071 lat (msec): min=13, 
max=389, avg=212.64, stdev=22.61 00:25:56.071 clat percentiles (msec): 00:25:56.071 | 1.00th=[ 103], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.071 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.071 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.071 | 99.00th=[ 296], 99.50th=[ 351], 99.90th=[ 388], 99.95th=[ 388], 00:25:56.071 | 99.99th=[ 388] 00:25:56.071 bw ( KiB/s): min=18944, max=19968, per=3.33%, avg=19221.70, stdev=306.83, samples=20 00:25:56.071 iops : min= 74, max= 78, avg=75.00, stdev= 1.17, samples=20 00:25:56.071 lat (msec) : 20=0.13%, 50=0.39%, 100=0.39%, 250=97.65%, 500=1.44% 00:25:56.071 cpu : usr=0.25%, sys=0.37%, ctx=768, majf=0, minf=1 00:25:56.071 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 32=0.0%, >=64=0.0% 00:25:56.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.071 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.071 issued rwts: total=0,766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.071 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.071 job11: (groupid=0, jobs=1): err= 0: pid=86067: Wed Jul 24 05:15:09 2024 00:25:56.071 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(192MiB/10196msec); 0 zone resets 00:25:56.071 slat (usec): min=24, max=1370, avg=65.99, stdev=49.22 00:25:56.071 clat (msec): min=9, max=391, avg=212.34, stdev=23.90 00:25:56.071 lat (msec): min=10, max=391, avg=212.40, stdev=23.89 00:25:56.071 clat percentiles (msec): 00:25:56.071 | 1.00th=[ 89], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.071 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.071 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.071 | 99.00th=[ 300], 99.50th=[ 351], 99.90th=[ 393], 99.95th=[ 393], 00:25:56.071 | 99.99th=[ 393] 00:25:56.071 bw ( KiB/s): min=18906, max=19968, per=3.34%, avg=19245.40, stdev=309.46, samples=20 00:25:56.071 iops : 
min= 73, max= 78, avg=75.00, stdev= 1.30, samples=20 00:25:56.071 lat (msec) : 10=0.13%, 20=0.13%, 50=0.39%, 100=0.39%, 250=97.52% 00:25:56.071 lat (msec) : 500=1.43% 00:25:56.071 cpu : usr=0.26%, sys=0.43%, ctx=770, majf=0, minf=1 00:25:56.071 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 32=0.0%, >=64=0.0% 00:25:56.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.071 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.071 issued rwts: total=0,767,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.071 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.071 job12: (groupid=0, jobs=1): err= 0: pid=86068: Wed Jul 24 05:15:09 2024 00:25:56.071 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(192MiB/10194msec); 0 zone resets 00:25:56.071 slat (usec): min=24, max=189, avg=64.03, stdev=13.36 00:25:56.071 clat (msec): min=8, max=391, avg=212.31, stdev=24.15 00:25:56.071 lat (msec): min=8, max=392, avg=212.37, stdev=24.15 00:25:56.071 clat percentiles (msec): 00:25:56.071 | 1.00th=[ 87], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.071 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.071 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.071 | 99.00th=[ 300], 99.50th=[ 351], 99.90th=[ 393], 99.95th=[ 393], 00:25:56.071 | 99.99th=[ 393] 00:25:56.071 bw ( KiB/s): min=18906, max=19968, per=3.34%, avg=19243.35, stdev=344.06, samples=20 00:25:56.071 iops : min= 73, max= 78, avg=74.95, stdev= 1.36, samples=20 00:25:56.071 lat (msec) : 10=0.13%, 20=0.13%, 50=0.39%, 100=0.52%, 250=97.39% 00:25:56.071 lat (msec) : 500=1.43% 00:25:56.071 cpu : usr=0.17%, sys=0.53%, ctx=770, majf=0, minf=1 00:25:56.071 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 32=0.0%, >=64=0.0% 00:25:56.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.071 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:25:56.071 issued rwts: total=0,767,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.071 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.071 job13: (groupid=0, jobs=1): err= 0: pid=86069: Wed Jul 24 05:15:09 2024 00:25:56.071 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(192MiB/10201msec); 0 zone resets 00:25:56.071 slat (usec): min=29, max=123, avg=62.29, stdev=11.74 00:25:56.071 clat (msec): min=11, max=391, avg=212.18, stdev=24.39 00:25:56.071 lat (msec): min=11, max=391, avg=212.25, stdev=24.40 00:25:56.071 clat percentiles (msec): 00:25:56.071 | 1.00th=[ 82], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.071 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.071 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.071 | 99.00th=[ 300], 99.50th=[ 351], 99.90th=[ 393], 99.95th=[ 393], 00:25:56.071 | 99.99th=[ 393] 00:25:56.071 bw ( KiB/s): min=18906, max=19968, per=3.34%, avg=19274.90, stdev=345.50, samples=20 00:25:56.071 iops : min= 73, max= 78, avg=75.25, stdev= 1.41, samples=20 00:25:56.071 lat (msec) : 20=0.26%, 50=0.39%, 100=0.52%, 250=97.40%, 500=1.43% 00:25:56.071 cpu : usr=0.30%, sys=0.38%, ctx=770, majf=0, minf=1 00:25:56.071 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 32=0.0%, >=64=0.0% 00:25:56.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.071 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.072 issued rwts: total=0,768,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.072 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.072 job14: (groupid=0, jobs=1): err= 0: pid=86070: Wed Jul 24 05:15:09 2024 00:25:56.072 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(192MiB/10193msec); 0 zone resets 00:25:56.072 slat (usec): min=22, max=139, avg=59.59, stdev=13.32 00:25:56.072 clat (msec): min=12, max=389, avg=212.57, stdev=22.78 00:25:56.072 lat (msec): min=12, max=389, avg=212.63, stdev=22.78 00:25:56.072 clat 
percentiles (msec): 00:25:56.072 | 1.00th=[ 102], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.072 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.072 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.072 | 99.00th=[ 296], 99.50th=[ 351], 99.90th=[ 388], 99.95th=[ 388], 00:25:56.072 | 99.99th=[ 388] 00:25:56.072 bw ( KiB/s): min=18906, max=19968, per=3.33%, avg=19221.75, stdev=310.18, samples=20 00:25:56.072 iops : min= 73, max= 78, avg=75.00, stdev= 1.26, samples=20 00:25:56.072 lat (msec) : 20=0.13%, 50=0.39%, 100=0.39%, 250=97.65%, 500=1.44% 00:25:56.072 cpu : usr=0.28%, sys=0.34%, ctx=768, majf=0, minf=1 00:25:56.072 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 32=0.0%, >=64=0.0% 00:25:56.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.072 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.072 issued rwts: total=0,766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.072 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.072 job15: (groupid=0, jobs=1): err= 0: pid=86072: Wed Jul 24 05:15:09 2024 00:25:56.072 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(191MiB/10183msec); 0 zone resets 00:25:56.072 slat (usec): min=18, max=260, avg=49.10, stdev=23.40 00:25:56.072 clat (msec): min=24, max=377, avg=212.67, stdev=20.54 00:25:56.072 lat (msec): min=24, max=377, avg=212.72, stdev=20.54 00:25:56.072 clat percentiles (msec): 00:25:56.072 | 1.00th=[ 117], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.072 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.072 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.072 | 99.00th=[ 284], 99.50th=[ 338], 99.90th=[ 380], 99.95th=[ 380], 00:25:56.072 | 99.99th=[ 380] 00:25:56.072 bw ( KiB/s): min=18432, max=19968, per=3.33%, avg=19194.20, stdev=351.14, samples=20 00:25:56.072 iops : min= 72, max= 78, avg=74.85, stdev= 1.39, 
samples=20 00:25:56.072 lat (msec) : 50=0.26%, 100=0.52%, 250=97.91%, 500=1.31% 00:25:56.072 cpu : usr=0.25%, sys=0.19%, ctx=793, majf=0, minf=1 00:25:56.072 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 32=0.0%, >=64=0.0% 00:25:56.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.072 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.072 issued rwts: total=0,765,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.072 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.072 job16: (groupid=0, jobs=1): err= 0: pid=86076: Wed Jul 24 05:15:09 2024 00:25:56.072 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(192MiB/10196msec); 0 zone resets 00:25:56.072 slat (usec): min=27, max=5737, avg=68.93, stdev=205.57 00:25:56.072 clat (msec): min=11, max=390, avg=212.53, stdev=23.11 00:25:56.072 lat (msec): min=16, max=390, avg=212.60, stdev=23.05 00:25:56.072 clat percentiles (msec): 00:25:56.072 | 1.00th=[ 99], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.072 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.072 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.072 | 99.00th=[ 296], 99.50th=[ 351], 99.90th=[ 393], 99.95th=[ 393], 00:25:56.072 | 99.99th=[ 393] 00:25:56.072 bw ( KiB/s): min=18906, max=19968, per=3.33%, avg=19217.90, stdev=310.64, samples=20 00:25:56.072 iops : min= 73, max= 78, avg=74.90, stdev= 1.29, samples=20 00:25:56.072 lat (msec) : 20=0.26%, 50=0.26%, 100=0.52%, 250=97.52%, 500=1.44% 00:25:56.072 cpu : usr=0.26%, sys=0.42%, ctx=767, majf=0, minf=1 00:25:56.072 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 32=0.0%, >=64=0.0% 00:25:56.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.072 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.072 issued rwts: total=0,766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.072 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:25:56.072 job17: (groupid=0, jobs=1): err= 0: pid=86077: Wed Jul 24 05:15:09 2024 00:25:56.072 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(191MiB/10189msec); 0 zone resets 00:25:56.072 slat (usec): min=27, max=338, avg=63.56, stdev=16.25 00:25:56.072 clat (msec): min=20, max=388, avg=212.77, stdev=21.70 00:25:56.072 lat (msec): min=20, max=388, avg=212.83, stdev=21.70 00:25:56.072 clat percentiles (msec): 00:25:56.072 | 1.00th=[ 113], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.072 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.072 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.072 | 99.00th=[ 296], 99.50th=[ 351], 99.90th=[ 388], 99.95th=[ 388], 00:25:56.072 | 99.99th=[ 388] 00:25:56.072 bw ( KiB/s): min=18944, max=19456, per=3.33%, avg=19194.15, stdev=256.98, samples=20 00:25:56.072 iops : min= 74, max= 76, avg=74.85, stdev= 0.93, samples=20 00:25:56.072 lat (msec) : 50=0.39%, 100=0.52%, 250=97.65%, 500=1.44% 00:25:56.072 cpu : usr=0.30%, sys=0.38%, ctx=771, majf=0, minf=1 00:25:56.072 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 32=0.0%, >=64=0.0% 00:25:56.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.072 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.072 issued rwts: total=0,765,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.072 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.072 job18: (groupid=0, jobs=1): err= 0: pid=86078: Wed Jul 24 05:15:09 2024 00:25:56.072 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(192MiB/10195msec); 0 zone resets 00:25:56.072 slat (usec): min=16, max=316, avg=50.67, stdev=18.20 00:25:56.072 clat (msec): min=16, max=385, avg=212.65, stdev=21.78 00:25:56.072 lat (msec): min=16, max=385, avg=212.70, stdev=21.78 00:25:56.072 clat percentiles (msec): 00:25:56.072 | 1.00th=[ 109], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.072 | 30.00th=[ 
211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.072 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.072 | 99.00th=[ 292], 99.50th=[ 347], 99.90th=[ 384], 99.95th=[ 384], 00:25:56.072 | 99.99th=[ 384] 00:25:56.072 bw ( KiB/s): min=18944, max=19968, per=3.33%, avg=19219.65, stdev=379.59, samples=20 00:25:56.072 iops : min= 74, max= 78, avg=74.95, stdev= 1.32, samples=20 00:25:56.072 lat (msec) : 20=0.13%, 50=0.26%, 100=0.52%, 250=97.65%, 500=1.44% 00:25:56.072 cpu : usr=0.20%, sys=0.29%, ctx=781, majf=0, minf=1 00:25:56.072 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 32=0.0%, >=64=0.0% 00:25:56.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.072 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.072 issued rwts: total=0,766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.072 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.072 job19: (groupid=0, jobs=1): err= 0: pid=86079: Wed Jul 24 05:15:09 2024 00:25:56.072 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(192MiB/10203msec); 0 zone resets 00:25:56.072 slat (usec): min=24, max=2998, avg=56.29, stdev=106.91 00:25:56.072 clat (msec): min=6, max=389, avg=212.19, stdev=24.31 00:25:56.072 lat (msec): min=9, max=389, avg=212.24, stdev=24.28 00:25:56.072 clat percentiles (msec): 00:25:56.072 | 1.00th=[ 83], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.072 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.072 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.072 | 99.00th=[ 296], 99.50th=[ 351], 99.90th=[ 388], 99.95th=[ 388], 00:25:56.072 | 99.99th=[ 388] 00:25:56.072 bw ( KiB/s): min=18432, max=19968, per=3.34%, avg=19267.20, stdev=419.48, samples=20 00:25:56.072 iops : min= 72, max= 78, avg=75.05, stdev= 1.76, samples=20 00:25:56.072 lat (msec) : 10=0.13%, 20=0.13%, 50=0.39%, 100=0.52%, 250=97.40% 00:25:56.072 lat (msec) : 500=1.43% 
00:25:56.072 cpu : usr=0.25%, sys=0.24%, ctx=771, majf=0, minf=1 00:25:56.072 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 32=0.0%, >=64=0.0% 00:25:56.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.072 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.072 issued rwts: total=0,768,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.072 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.072 job20: (groupid=0, jobs=1): err= 0: pid=86080: Wed Jul 24 05:15:09 2024 00:25:56.072 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(191MiB/10185msec); 0 zone resets 00:25:56.072 slat (usec): min=28, max=534, avg=65.04, stdev=29.91 00:25:56.072 clat (msec): min=24, max=379, avg=212.67, stdev=20.69 00:25:56.072 lat (msec): min=25, max=379, avg=212.74, stdev=20.68 00:25:56.072 clat percentiles (msec): 00:25:56.072 | 1.00th=[ 117], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.072 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.072 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.072 | 99.00th=[ 288], 99.50th=[ 338], 99.90th=[ 380], 99.95th=[ 380], 00:25:56.072 | 99.99th=[ 380] 00:25:56.072 bw ( KiB/s): min=18432, max=19968, per=3.33%, avg=19199.80, stdev=416.38, samples=20 00:25:56.072 iops : min= 72, max= 78, avg=74.95, stdev= 1.57, samples=20 00:25:56.072 lat (msec) : 50=0.39%, 100=0.39%, 250=97.91%, 500=1.31% 00:25:56.072 cpu : usr=0.30%, sys=0.36%, ctx=791, majf=0, minf=1 00:25:56.072 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 32=0.0%, >=64=0.0% 00:25:56.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.072 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.072 issued rwts: total=0,765,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.072 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.072 job21: (groupid=0, jobs=1): err= 0: pid=86081: Wed Jul 24 
05:15:09 2024 00:25:56.072 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(192MiB/10193msec); 0 zone resets 00:25:56.072 slat (usec): min=21, max=129, avg=61.23, stdev=12.08 00:25:56.072 clat (msec): min=13, max=389, avg=212.58, stdev=22.62 00:25:56.072 lat (msec): min=13, max=389, avg=212.64, stdev=22.63 00:25:56.072 clat percentiles (msec): 00:25:56.073 | 1.00th=[ 103], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.073 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.073 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.073 | 99.00th=[ 296], 99.50th=[ 351], 99.90th=[ 388], 99.95th=[ 388], 00:25:56.073 | 99.99th=[ 388] 00:25:56.073 bw ( KiB/s): min=18944, max=19968, per=3.33%, avg=19221.70, stdev=306.83, samples=20 00:25:56.073 iops : min= 74, max= 78, avg=75.00, stdev= 1.17, samples=20 00:25:56.073 lat (msec) : 20=0.13%, 50=0.39%, 100=0.39%, 250=97.65%, 500=1.44% 00:25:56.073 cpu : usr=0.14%, sys=0.53%, ctx=768, majf=0, minf=1 00:25:56.073 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 32=0.0%, >=64=0.0% 00:25:56.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.073 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.073 issued rwts: total=0,766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.073 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.073 job22: (groupid=0, jobs=1): err= 0: pid=86082: Wed Jul 24 05:15:09 2024 00:25:56.073 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(191MiB/10183msec); 0 zone resets 00:25:56.073 slat (usec): min=22, max=167, avg=47.59, stdev=16.11 00:25:56.073 clat (msec): min=23, max=377, avg=212.67, stdev=20.61 00:25:56.073 lat (msec): min=24, max=378, avg=212.72, stdev=20.62 00:25:56.073 clat percentiles (msec): 00:25:56.073 | 1.00th=[ 116], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.073 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.073 | 70.00th=[ 215], 
80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.073 | 99.00th=[ 284], 99.50th=[ 338], 99.90th=[ 380], 99.95th=[ 380], 00:25:56.073 | 99.99th=[ 380] 00:25:56.073 bw ( KiB/s): min=18468, max=19968, per=3.33%, avg=19196.00, stdev=347.09, samples=20 00:25:56.073 iops : min= 72, max= 78, avg=74.85, stdev= 1.39, samples=20 00:25:56.073 lat (msec) : 50=0.39%, 100=0.39%, 250=97.91%, 500=1.31% 00:25:56.073 cpu : usr=0.20%, sys=0.24%, ctx=786, majf=0, minf=1 00:25:56.073 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 32=0.0%, >=64=0.0% 00:25:56.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.073 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.073 issued rwts: total=0,765,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.073 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.073 job23: (groupid=0, jobs=1): err= 0: pid=86083: Wed Jul 24 05:15:09 2024 00:25:56.073 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(192MiB/10193msec); 0 zone resets 00:25:56.073 slat (usec): min=22, max=140, avg=61.28, stdev=15.38 00:25:56.073 clat (msec): min=12, max=389, avg=212.57, stdev=22.76 00:25:56.073 lat (msec): min=12, max=389, avg=212.63, stdev=22.77 00:25:56.073 clat percentiles (msec): 00:25:56.073 | 1.00th=[ 102], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.073 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.073 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.073 | 99.00th=[ 296], 99.50th=[ 351], 99.90th=[ 388], 99.95th=[ 388], 00:25:56.073 | 99.99th=[ 388] 00:25:56.073 bw ( KiB/s): min=18906, max=19968, per=3.33%, avg=19221.75, stdev=310.18, samples=20 00:25:56.073 iops : min= 73, max= 78, avg=75.00, stdev= 1.26, samples=20 00:25:56.073 lat (msec) : 20=0.13%, 50=0.39%, 100=0.39%, 250=97.65%, 500=1.44% 00:25:56.073 cpu : usr=0.32%, sys=0.33%, ctx=766, majf=0, minf=1 00:25:56.073 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 
32=0.0%, >=64=0.0% 00:25:56.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.073 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.073 issued rwts: total=0,766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.073 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.073 job24: (groupid=0, jobs=1): err= 0: pid=86084: Wed Jul 24 05:15:09 2024 00:25:56.073 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(191MiB/10177msec); 0 zone resets 00:25:56.073 slat (usec): min=18, max=159, avg=50.68, stdev=12.64 00:25:56.073 clat (msec): min=23, max=372, avg=212.56, stdev=20.36 00:25:56.073 lat (msec): min=23, max=373, avg=212.61, stdev=20.36 00:25:56.073 clat percentiles (msec): 00:25:56.073 | 1.00th=[ 116], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.073 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.073 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.073 | 99.00th=[ 279], 99.50th=[ 334], 99.90th=[ 372], 99.95th=[ 372], 00:25:56.073 | 99.99th=[ 372] 00:25:56.073 bw ( KiB/s): min=18432, max=19456, per=3.33%, avg=19192.25, stdev=307.74, samples=20 00:25:56.073 iops : min= 72, max= 76, avg=74.80, stdev= 1.20, samples=20 00:25:56.073 lat (msec) : 50=0.39%, 100=0.39%, 250=97.91%, 500=1.31% 00:25:56.073 cpu : usr=0.09%, sys=0.43%, ctx=769, majf=0, minf=1 00:25:56.073 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 32=0.0%, >=64=0.0% 00:25:56.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.073 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.073 issued rwts: total=0,765,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.073 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.073 job25: (groupid=0, jobs=1): err= 0: pid=86085: Wed Jul 24 05:15:09 2024 00:25:56.073 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(192MiB/10196msec); 0 zone resets 00:25:56.073 slat (usec): min=20, 
max=617, avg=53.51, stdev=31.28 00:25:56.073 clat (msec): min=15, max=385, avg=212.64, stdev=21.81 00:25:56.073 lat (msec): min=16, max=385, avg=212.69, stdev=21.81 00:25:56.073 clat percentiles (msec): 00:25:56.073 | 1.00th=[ 109], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.073 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.073 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.073 | 99.00th=[ 292], 99.50th=[ 347], 99.90th=[ 384], 99.95th=[ 384], 00:25:56.073 | 99.99th=[ 384] 00:25:56.073 bw ( KiB/s): min=18944, max=19968, per=3.33%, avg=19219.65, stdev=379.59, samples=20 00:25:56.073 iops : min= 74, max= 78, avg=74.95, stdev= 1.32, samples=20 00:25:56.073 lat (msec) : 20=0.13%, 50=0.26%, 100=0.52%, 250=97.65%, 500=1.44% 00:25:56.073 cpu : usr=0.24%, sys=0.26%, ctx=788, majf=0, minf=1 00:25:56.073 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 32=0.0%, >=64=0.0% 00:25:56.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.073 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.073 issued rwts: total=0,766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.073 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.073 job26: (groupid=0, jobs=1): err= 0: pid=86086: Wed Jul 24 05:15:09 2024 00:25:56.073 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(191MiB/10188msec); 0 zone resets 00:25:56.073 slat (usec): min=23, max=2527, avg=56.54, stdev=90.17 00:25:56.073 clat (msec): min=22, max=382, avg=212.73, stdev=21.08 00:25:56.073 lat (msec): min=25, max=382, avg=212.78, stdev=21.05 00:25:56.073 clat percentiles (msec): 00:25:56.073 | 1.00th=[ 115], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.073 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.073 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.073 | 99.00th=[ 292], 99.50th=[ 342], 99.90th=[ 384], 99.95th=[ 384], 00:25:56.073 
| 99.99th=[ 384] 00:25:56.073 bw ( KiB/s): min=18432, max=19928, per=3.33%, avg=19192.30, stdev=352.72, samples=20 00:25:56.073 iops : min= 72, max= 77, avg=74.85, stdev= 1.39, samples=20 00:25:56.073 lat (msec) : 50=0.39%, 100=0.39%, 250=97.91%, 500=1.31% 00:25:56.073 cpu : usr=0.28%, sys=0.22%, ctx=767, majf=0, minf=1 00:25:56.073 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 32=0.0%, >=64=0.0% 00:25:56.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.073 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.073 issued rwts: total=0,765,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.073 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.073 job27: (groupid=0, jobs=1): err= 0: pid=86087: Wed Jul 24 05:15:09 2024 00:25:56.073 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(191MiB/10192msec); 0 zone resets 00:25:56.073 slat (usec): min=27, max=2770, avg=62.43, stdev=98.94 00:25:56.073 clat (msec): min=22, max=385, avg=212.75, stdev=21.28 00:25:56.073 lat (msec): min=25, max=385, avg=212.82, stdev=21.25 00:25:56.073 clat percentiles (msec): 00:25:56.073 | 1.00th=[ 115], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.073 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.073 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.073 | 99.00th=[ 292], 99.50th=[ 347], 99.90th=[ 384], 99.95th=[ 384], 00:25:56.073 | 99.99th=[ 384] 00:25:56.073 bw ( KiB/s): min=18432, max=19968, per=3.33%, avg=19194.15, stdev=419.89, samples=20 00:25:56.073 iops : min= 72, max= 78, avg=74.85, stdev= 1.60, samples=20 00:25:56.073 lat (msec) : 50=0.39%, 100=0.39%, 250=97.78%, 500=1.44% 00:25:56.073 cpu : usr=0.20%, sys=0.43%, ctx=772, majf=0, minf=1 00:25:56.073 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 32=0.0%, >=64=0.0% 00:25:56.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.073 complete : 0=0.0%, 4=99.9%, 
8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.073 issued rwts: total=0,765,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.073 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.073 job28: (groupid=0, jobs=1): err= 0: pid=86088: Wed Jul 24 05:15:09 2024 00:25:56.073 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(191MiB/10184msec); 0 zone resets 00:25:56.073 slat (usec): min=18, max=167, avg=51.56, stdev=15.69 00:25:56.073 clat (msec): min=24, max=379, avg=212.70, stdev=20.70 00:25:56.073 lat (msec): min=24, max=379, avg=212.75, stdev=20.71 00:25:56.073 clat percentiles (msec): 00:25:56.073 | 1.00th=[ 117], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.073 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.073 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.074 | 99.00th=[ 288], 99.50th=[ 338], 99.90th=[ 380], 99.95th=[ 380], 00:25:56.074 | 99.99th=[ 380] 00:25:56.074 bw ( KiB/s): min=18468, max=19968, per=3.33%, avg=19201.60, stdev=412.95, samples=20 00:25:56.074 iops : min= 72, max= 78, avg=74.95, stdev= 1.57, samples=20 00:25:56.074 lat (msec) : 50=0.39%, 100=0.39%, 250=97.91%, 500=1.31% 00:25:56.074 cpu : usr=0.18%, sys=0.33%, ctx=796, majf=0, minf=1 00:25:56.074 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 32=0.0%, >=64=0.0% 00:25:56.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.074 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.074 issued rwts: total=0,765,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.074 job29: (groupid=0, jobs=1): err= 0: pid=86089: Wed Jul 24 05:15:09 2024 00:25:56.074 write: IOPS=75, BW=18.8MiB/s (19.7MB/s)(191MiB/10184msec); 0 zone resets 00:25:56.074 slat (usec): min=22, max=148, avg=48.84, stdev=14.24 00:25:56.074 clat (msec): min=23, max=380, avg=212.70, stdev=20.91 00:25:56.074 lat (msec): min=23, max=380, 
avg=212.75, stdev=20.91 00:25:56.074 clat percentiles (msec): 00:25:56.074 | 1.00th=[ 115], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 209], 00:25:56.074 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 213], 60.00th=[ 215], 00:25:56.074 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:25:56.074 | 99.00th=[ 288], 99.50th=[ 342], 99.90th=[ 380], 99.95th=[ 380], 00:25:56.074 | 99.99th=[ 380] 00:25:56.074 bw ( KiB/s): min=18432, max=19968, per=3.33%, avg=19199.80, stdev=416.38, samples=20 00:25:56.074 iops : min= 72, max= 78, avg=74.95, stdev= 1.57, samples=20 00:25:56.074 lat (msec) : 50=0.39%, 100=0.39%, 250=97.91%, 500=1.31% 00:25:56.074 cpu : usr=0.12%, sys=0.34%, ctx=789, majf=0, minf=1 00:25:56.074 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=98.0%, 32=0.0%, >=64=0.0% 00:25:56.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.074 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.074 issued rwts: total=0,765,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:56.074 00:25:56.074 Run status group 0 (all jobs): 00:25:56.074 WRITE: bw=563MiB/s (590MB/s), 18.8MiB/s-18.9MiB/s (19.7MB/s-19.8MB/s), io=5745MiB (6024MB), run=10177-10204msec 00:25:56.074 00:25:56.074 Disk stats (read/write): 00:25:56.074 sda: ios=48/757, merge=0/0, ticks=214/159313, in_queue=159526, util=95.14% 00:25:56.074 sdb: ios=48/753, merge=0/0, ticks=184/158947, in_queue=159132, util=94.91% 00:25:56.074 sdc: ios=48/760, merge=0/0, ticks=201/159368, in_queue=159570, util=95.49% 00:25:56.074 sdd: ios=48/752, merge=0/0, ticks=184/158808, in_queue=158992, util=95.22% 00:25:56.074 sde: ios=42/753, merge=0/0, ticks=104/158906, in_queue=159010, util=95.26% 00:25:56.074 sdf: ios=44/757, merge=0/0, ticks=182/159436, in_queue=159618, util=95.72% 00:25:56.074 sdg: ios=38/755, merge=0/0, ticks=170/159272, in_queue=159441, util=95.69% 00:25:56.074 sdh: ios=21/753, 
merge=0/0, ticks=111/158945, in_queue=159056, util=95.51% 00:25:56.074 sdi: ios=14/752, merge=0/0, ticks=88/158772, in_queue=158860, util=95.32% 00:25:56.074 sdj: ios=0/752, merge=0/0, ticks=0/158819, in_queue=158819, util=95.68% 00:25:56.074 sdk: ios=0/755, merge=0/0, ticks=0/159230, in_queue=159231, util=96.04% 00:25:56.074 sdl: ios=0/757, merge=0/0, ticks=0/159418, in_queue=159418, util=96.43% 00:25:56.074 sdm: ios=0/757, merge=0/0, ticks=0/159388, in_queue=159388, util=96.58% 00:25:56.074 sdn: ios=0/758, merge=0/0, ticks=0/159518, in_queue=159518, util=96.88% 00:25:56.074 sdo: ios=0/755, merge=0/0, ticks=0/159212, in_queue=159211, util=96.82% 00:25:56.074 sdp: ios=0/752, merge=0/0, ticks=0/158754, in_queue=158754, util=96.99% 00:25:56.074 sdq: ios=0/755, merge=0/0, ticks=0/159150, in_queue=159151, util=97.31% 00:25:56.074 sdr: ios=0/753, merge=0/0, ticks=0/158950, in_queue=158950, util=97.55% 00:25:56.074 sds: ios=0/754, merge=0/0, ticks=0/159022, in_queue=159022, util=97.69% 00:25:56.074 sdt: ios=0/757, merge=0/0, ticks=0/159348, in_queue=159349, util=98.01% 00:25:56.074 sdu: ios=0/752, merge=0/0, ticks=0/158788, in_queue=158788, util=97.83% 00:25:56.074 sdv: ios=0/755, merge=0/0, ticks=0/159221, in_queue=159222, util=98.18% 00:25:56.074 sdw: ios=0/752, merge=0/0, ticks=0/158757, in_queue=158757, util=98.02% 00:25:56.074 sdx: ios=0/755, merge=0/0, ticks=0/159200, in_queue=159199, util=98.34% 00:25:56.074 sdy: ios=0/752, merge=0/0, ticks=0/158745, in_queue=158745, util=98.03% 00:25:56.074 sdz: ios=0/754, merge=0/0, ticks=0/159027, in_queue=159028, util=98.40% 00:25:56.074 sdaa: ios=0/753, merge=0/0, ticks=0/159018, in_queue=159018, util=98.45% 00:25:56.074 sdab: ios=0/753, merge=0/0, ticks=0/158990, in_queue=158991, util=98.49% 00:25:56.074 sdac: ios=0/753, merge=0/0, ticks=0/158987, in_queue=158988, util=98.57% 00:25:56.074 sdad: ios=0/753, merge=0/0, ticks=0/158982, in_queue=158982, util=98.84% 00:25:56.074 [2024-07-24 05:15:09.289473] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.074 [2024-07-24 05:15:09.293404] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.074 [2024-07-24 05:15:09.296787] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.074 [2024-07-24 05:15:09.300152] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.074 [2024-07-24 05:15:09.303709] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.074 [2024-07-24 05:15:09.307334] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.074 [2024-07-24 05:15:09.310938] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.074 [2024-07-24 05:15:09.314392] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.074 05:15:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@79 -- # sync 00:25:56.074 [2024-07-24 05:15:09.318137] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.074 [2024-07-24 05:15:09.321693] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.074 [2024-07-24 05:15:09.326061] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.074 [2024-07-24 05:15:09.330084] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.074 05:15:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:25:56.074 05:15:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@83 -- # rm -f 00:25:56.074 [2024-07-24 05:15:09.333371] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.074 05:15:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@84 -- # iscsicleanup 
00:25:56.074 Cleaning up iSCSI connection 00:25:56.074 05:15:09 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:25:56.074 05:15:09 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:25:56.074 [2024-07-24 05:15:09.339841] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.074 [2024-07-24 05:15:09.343245] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.074 [2024-07-24 05:15:09.346555] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.074 [2024-07-24 05:15:09.350205] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:25:56.074 Logging out of session [sid: 41, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:25:56.074 Logging out of session [sid: 42, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:25:56.074 Logging out of session [sid: 43, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:25:56.074 Logging out of session [sid: 44, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:25:56.074 Logging out of session [sid: 45, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:25:56.074 Logging out of session [sid: 46, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:25:56.074 Logging out of session [sid: 47, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:25:56.075 Logging out of session [sid: 48, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:25:56.075 Logging out of session [sid: 49, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:25:56.075 Logging out of session [sid: 50, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:25:56.075 Logging out of session [sid: 51, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:25:56.075 Logging out of session [sid: 52, 
target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:25:56.075 Logging out of session [sid: 53, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:25:56.075 Logging out of session [sid: 54, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:25:56.075 Logging out of session [sid: 55, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:25:56.075 Logging out of session [sid: 56, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] 00:25:56.075 Logging out of session [sid: 57, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] 00:25:56.075 Logging out of session [sid: 58, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] 00:25:56.075 Logging out of session [sid: 59, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] 00:25:56.075 Logging out of session [sid: 60, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] 00:25:56.075 Logging out of session [sid: 61, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] 00:25:56.075 Logging out of session [sid: 62, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] 00:25:56.075 Logging out of session [sid: 63, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] 00:25:56.075 Logging out of session [sid: 64, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] 00:25:56.075 Logging out of session [sid: 65, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] 00:25:56.075 Logging out of session [sid: 66, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] 00:25:56.075 Logging out of session [sid: 67, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] 00:25:56.075 Logging out of session [sid: 68, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] 00:25:56.075 Logging out of session [sid: 69, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] 00:25:56.075 Logging out of session [sid: 70, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] 00:25:56.075 
Logout of [sid: 41, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:25:56.075 Logout of [sid: 42, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:25:56.075 Logout of [sid: 43, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:25:56.075 Logout of [sid: 44, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:25:56.075 Logout of [sid: 45, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:25:56.075 Logout of [sid: 46, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:25:56.075 Logout of [sid: 47, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:25:56.075 Logout of [sid: 48, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:25:56.075 Logout of [sid: 49, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:25:56.075 Logout of [sid: 50, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:25:56.075 Logout of [sid: 51, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:25:56.075 Logout of [sid: 52, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:25:56.075 Logout of [sid: 53, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:25:56.075 Logout of [sid: 54, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 00:25:56.075 Logout of [sid: 55, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 00:25:56.075 Logout of [sid: 56, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] successful. 00:25:56.075 Logout of [sid: 57, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] successful. 00:25:56.075 Logout of [sid: 58, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] successful. 00:25:56.075 Logout of [sid: 59, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] successful. 
00:25:56.075 Logout of [sid: 60, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] successful. 00:25:56.075 Logout of [sid: 61, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] successful. 00:25:56.075 Logout of [sid: 62, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] successful. 00:25:56.075 Logout of [sid: 63, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] successful. 00:25:56.075 Logout of [sid: 64, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] successful. 00:25:56.075 Logout of [sid: 65, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] successful. 00:25:56.075 Logout of [sid: 66, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] successful. 00:25:56.075 Logout of [sid: 67, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] successful. 00:25:56.075 Logout of [sid: 68, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] successful. 00:25:56.075 Logout of [sid: 69, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] successful. 00:25:56.075 Logout of [sid: 70, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] successful. 
00:25:56.075 05:15:10 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:25:56.075 05:15:10 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@983 -- # rm -rf 00:25:56.075 05:15:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@85 -- # remove_backends 00:25:56.075 INFO: Removing lvol bdevs 00:25:56.075 05:15:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@22 -- # echo 'INFO: Removing lvol bdevs' 00:25:56.075 05:15:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # seq 1 30 00:25:56.075 05:15:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:56.075 05:15:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_1 00:25:56.075 05:15:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_1 00:25:56.075 [2024-07-24 05:15:10.430896] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (86e691ae-af89-4aae-ba5d-bd4e0e7f337a) received event(SPDK_BDEV_EVENT_REMOVE) 00:25:56.075 INFO: lvol bdev lvs0/lbd_1 removed 00:25:56.075 05:15:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_1 removed' 00:25:56.075 05:15:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:56.075 05:15:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_2 00:25:56.075 05:15:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_2 00:25:56.333 [2024-07-24 05:15:10.699017] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (9c70f62c-81a2-4582-9650-8ca1ef7c3c05) 
received event(SPDK_BDEV_EVENT_REMOVE) 00:25:56.333 INFO: lvol bdev lvs0/lbd_2 removed 00:25:56.333 05:15:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_2 removed' 00:25:56.333 05:15:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:56.333 05:15:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_3 00:25:56.333 05:15:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_3 00:25:56.590 [2024-07-24 05:15:10.967123] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (382e4647-5a10-4ad0-882d-280c62e68eab) received event(SPDK_BDEV_EVENT_REMOVE) 00:25:56.590 INFO: lvol bdev lvs0/lbd_3 removed 00:25:56.590 05:15:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_3 removed' 00:25:56.590 05:15:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:56.590 05:15:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_4 00:25:56.590 05:15:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_4 00:25:56.590 [2024-07-24 05:15:11.219197] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (e193ff87-f5e9-48f0-9abc-86f7f57675df) received event(SPDK_BDEV_EVENT_REMOVE) 00:25:56.848 INFO: lvol bdev lvs0/lbd_4 removed 00:25:56.848 05:15:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_4 removed' 00:25:56.848 05:15:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:56.848 
05:15:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_5 00:25:56.848 05:15:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_5 00:25:56.848 [2024-07-24 05:15:11.431402] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (4b7f910c-c368-486d-b982-64a3351b0dd0) received event(SPDK_BDEV_EVENT_REMOVE) 00:25:56.848 INFO: lvol bdev lvs0/lbd_5 removed 00:25:56.848 05:15:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_5 removed' 00:25:56.848 05:15:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:56.848 05:15:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_6 00:25:56.848 05:15:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_6 00:25:57.106 [2024-07-24 05:15:11.595440] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (68e0d436-0a61-4e91-8297-a0d98bd3d771) received event(SPDK_BDEV_EVENT_REMOVE) 00:25:57.106 INFO: lvol bdev lvs0/lbd_6 removed 00:25:57.106 05:15:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_6 removed' 00:25:57.106 05:15:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:57.106 05:15:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_7 00:25:57.106 05:15:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_7 00:25:57.364 [2024-07-24 05:15:11.767479] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name 
(095f87bf-62c1-443f-9f00-6b1c8a415d4b) received event(SPDK_BDEV_EVENT_REMOVE) 00:25:57.365 INFO: lvol bdev lvs0/lbd_7 removed 00:25:57.365 05:15:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_7 removed' 00:25:57.365 05:15:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:57.365 05:15:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_8 00:25:57.365 05:15:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_8 00:25:57.365 [2024-07-24 05:15:11.951618] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (850ed500-c906-46ff-ba58-727c6f8ee555) received event(SPDK_BDEV_EVENT_REMOVE) 00:25:57.365 INFO: lvol bdev lvs0/lbd_8 removed 00:25:57.365 05:15:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_8 removed' 00:25:57.365 05:15:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:57.365 05:15:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_9 00:25:57.365 05:15:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_9 00:25:57.623 [2024-07-24 05:15:12.123680] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (03df8ffd-0d9b-405b-bfbb-e2aeab55eccd) received event(SPDK_BDEV_EVENT_REMOVE) 00:25:57.623 INFO: lvol bdev lvs0/lbd_9 removed 00:25:57.623 05:15:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_9 removed' 00:25:57.623 05:15:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 
$CONNECTION_NUMBER) 00:25:57.623 05:15:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_10 00:25:57.623 05:15:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_10 00:25:57.882 [2024-07-24 05:15:12.375763] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (ee08d7b3-7be2-43f2-8b9e-db673dfea834) received event(SPDK_BDEV_EVENT_REMOVE) 00:25:57.882 INFO: lvol bdev lvs0/lbd_10 removed 00:25:57.882 05:15:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_10 removed' 00:25:57.882 05:15:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:57.882 05:15:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_11 00:25:57.882 05:15:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_11 00:25:58.141 [2024-07-24 05:15:12.543848] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (6b5c87ba-9b0b-43b3-b31b-27f10bbffe35) received event(SPDK_BDEV_EVENT_REMOVE) 00:25:58.141 INFO: lvol bdev lvs0/lbd_11 removed 00:25:58.141 05:15:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_11 removed' 00:25:58.141 05:15:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:58.141 05:15:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_12 00:25:58.141 05:15:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_12 00:25:58.141 [2024-07-24 05:15:12.703892] lun.c: 398:bdev_event_cb: 
*NOTICE*: bdev name (de46095c-77f0-4366-b7c8-41144b0ccaff) received event(SPDK_BDEV_EVENT_REMOVE) 00:25:58.141 INFO: lvol bdev lvs0/lbd_12 removed 00:25:58.141 05:15:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_12 removed' 00:25:58.141 05:15:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:58.141 05:15:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_13 00:25:58.141 05:15:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_13 00:25:58.401 [2024-07-24 05:15:12.883996] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (7cdb52f1-6a80-4b23-ab5d-887505c114f6) received event(SPDK_BDEV_EVENT_REMOVE) 00:25:58.401 INFO: lvol bdev lvs0/lbd_13 removed 00:25:58.401 05:15:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_13 removed' 00:25:58.401 05:15:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:58.401 05:15:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_14 00:25:58.401 05:15:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_14 00:25:58.660 [2024-07-24 05:15:13.064097] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (b9a2b099-e6e7-4737-beef-3d7e12668955) received event(SPDK_BDEV_EVENT_REMOVE) 00:25:58.660 INFO: lvol bdev lvs0/lbd_14 removed 00:25:58.660 05:15:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_14 removed' 00:25:58.660 05:15:13 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:58.660 05:15:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_15 00:25:58.660 05:15:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_15 00:25:58.924 [2024-07-24 05:15:13.328168] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (b06d4374-ed25-4817-8586-828556f478f6) received event(SPDK_BDEV_EVENT_REMOVE) 00:25:58.924 INFO: lvol bdev lvs0/lbd_15 removed 00:25:58.924 05:15:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_15 removed' 00:25:58.924 05:15:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:58.924 05:15:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_16 00:25:58.924 05:15:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_16 00:25:59.184 [2024-07-24 05:15:13.564282] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (bf60e6af-a695-4770-9a0b-c86c633f6b2b) received event(SPDK_BDEV_EVENT_REMOVE) 00:25:59.184 INFO: lvol bdev lvs0/lbd_16 removed 00:25:59.184 05:15:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_16 removed' 00:25:59.184 05:15:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:59.184 05:15:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_17 00:25:59.184 05:15:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_17 00:25:59.184 
[2024-07-24 05:15:13.728345] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (b3e4856a-b323-4595-9466-871f7517b2df) received event(SPDK_BDEV_EVENT_REMOVE) 00:25:59.184 INFO: lvol bdev lvs0/lbd_17 removed 00:25:59.184 05:15:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_17 removed' 00:25:59.184 05:15:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:59.184 05:15:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_18 00:25:59.184 05:15:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_18 00:25:59.443 [2024-07-24 05:15:13.912394] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (62092dba-01c7-4cf2-91bc-f3333d56e1dd) received event(SPDK_BDEV_EVENT_REMOVE) 00:25:59.443 INFO: lvol bdev lvs0/lbd_18 removed 00:25:59.443 05:15:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_18 removed' 00:25:59.443 05:15:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:59.443 05:15:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_19 00:25:59.443 05:15:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_19 00:25:59.702 [2024-07-24 05:15:14.076486] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (780942ce-1a2c-405e-9022-d1030f6236ff) received event(SPDK_BDEV_EVENT_REMOVE) 00:25:59.702 INFO: lvol bdev lvs0/lbd_19 removed 00:25:59.702 05:15:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_19 removed' 00:25:59.702 05:15:14 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:59.702 05:15:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_20 00:25:59.702 05:15:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_20 00:25:59.702 [2024-07-24 05:15:14.256662] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (c2378b34-dae2-471c-afda-e65ccb749af5) received event(SPDK_BDEV_EVENT_REMOVE) 00:25:59.702 INFO: lvol bdev lvs0/lbd_20 removed 00:25:59.702 05:15:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_20 removed' 00:25:59.702 05:15:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:59.702 05:15:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_21 00:25:59.702 05:15:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_21 00:25:59.960 [2024-07-24 05:15:14.424738] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (8edfffff-ebed-4385-9919-ff6266d92354) received event(SPDK_BDEV_EVENT_REMOVE) 00:25:59.960 INFO: lvol bdev lvs0/lbd_21 removed 00:25:59.960 05:15:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_21 removed' 00:25:59.960 05:15:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:25:59.960 05:15:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_22 00:25:59.960 05:15:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete lvs0/lbd_22 00:26:00.219 [2024-07-24 05:15:14.600814] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (9f5b07be-7c3f-4df0-a3f4-71249c7873fe) received event(SPDK_BDEV_EVENT_REMOVE) 00:26:00.219 INFO: lvol bdev lvs0/lbd_22 removed 00:26:00.219 05:15:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_22 removed' 00:26:00.219 05:15:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:26:00.219 05:15:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_23 00:26:00.219 05:15:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_23 00:26:00.219 [2024-07-24 05:15:14.776854] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (b699bd39-f59b-4353-83f0-4bfe7453dc8b) received event(SPDK_BDEV_EVENT_REMOVE) 00:26:00.219 INFO: lvol bdev lvs0/lbd_23 removed 00:26:00.219 05:15:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_23 removed' 00:26:00.219 05:15:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:26:00.219 05:15:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_24 00:26:00.219 05:15:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_24 00:26:00.477 [2024-07-24 05:15:14.957012] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (aaf58efe-c52a-4774-bc64-23339e6c03b2) received event(SPDK_BDEV_EVENT_REMOVE) 00:26:00.477 INFO: lvol bdev lvs0/lbd_24 removed 00:26:00.477 05:15:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_24 
removed' 00:26:00.477 05:15:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:26:00.477 05:15:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_25 00:26:00.477 05:15:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_25 00:26:00.735 [2024-07-24 05:15:15.121086] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (91b79db1-a04d-4ea8-8b80-232aac055bd4) received event(SPDK_BDEV_EVENT_REMOVE) 00:26:00.735 INFO: lvol bdev lvs0/lbd_25 removed 00:26:00.735 05:15:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_25 removed' 00:26:00.735 05:15:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:26:00.735 05:15:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_26 00:26:00.735 05:15:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_26 00:26:00.735 [2024-07-24 05:15:15.297130] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (1e60c176-863d-47e6-b966-2680acf105c9) received event(SPDK_BDEV_EVENT_REMOVE) 00:26:00.735 INFO: lvol bdev lvs0/lbd_26 removed 00:26:00.735 05:15:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_26 removed' 00:26:00.735 05:15:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:26:00.735 05:15:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_27 00:26:00.735 05:15:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_27 00:26:00.994 [2024-07-24 05:15:15.481211] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (2348a6ae-7ee8-4c19-80ef-918e2d5f6ea1) received event(SPDK_BDEV_EVENT_REMOVE) 00:26:00.994 INFO: lvol bdev lvs0/lbd_27 removed 00:26:00.994 05:15:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_27 removed' 00:26:00.994 05:15:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:26:00.994 05:15:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_28 00:26:00.994 05:15:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_28 00:26:01.252 [2024-07-24 05:15:15.661368] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (0e49d408-d70a-4652-aa73-571c30ff6c00) received event(SPDK_BDEV_EVENT_REMOVE) 00:26:01.252 INFO: lvol bdev lvs0/lbd_28 removed 00:26:01.252 05:15:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_28 removed' 00:26:01.252 05:15:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:26:01.252 05:15:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_29 00:26:01.252 05:15:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_29 00:26:01.252 [2024-07-24 05:15:15.825428] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (e81eb61e-f80a-463c-a3ba-2d21cff6f396) received event(SPDK_BDEV_EVENT_REMOVE) 00:26:01.252 INFO: lvol bdev lvs0/lbd_29 removed 00:26:01.252 05:15:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- 
# echo -e '\tINFO: lvol bdev lvs0/lbd_29 removed' 00:26:01.252 05:15:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:26:01.252 05:15:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_30 00:26:01.252 05:15:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_30 00:26:01.511 [2024-07-24 05:15:16.061555] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (742e26ff-7670-4155-97ed-272893c544f4) received event(SPDK_BDEV_EVENT_REMOVE) 00:26:01.511 INFO: lvol bdev lvs0/lbd_30 removed 00:26:01.511 05:15:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_30 removed' 00:26:01.511 05:15:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@28 -- # sleep 1 00:26:02.889 INFO: Removing lvol stores 00:26:02.889 05:15:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@30 -- # echo 'INFO: Removing lvol stores' 00:26:02.889 05:15:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:26:02.889 INFO: lvol store lvs0 removed 00:26:02.889 INFO: Removing NVMe 00:26:02.889 05:15:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@32 -- # echo 'INFO: lvol store lvs0 removed' 00:26:02.889 05:15:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@34 -- # echo 'INFO: Removing NVMe' 00:26:02.889 05:15:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:26:04.817 05:15:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@37 -- # return 0 00:26:04.817 05:15:18 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@86 -- # killprocess 84237 00:26:04.817 05:15:18 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 84237 ']' 00:26:04.817 05:15:18 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@952 -- # kill -0 84237 00:26:04.817 05:15:18 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@953 -- # uname 00:26:04.817 05:15:18 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:04.817 05:15:18 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84237 00:26:04.817 killing process with pid 84237 00:26:04.817 05:15:18 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:04.817 05:15:18 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:04.817 05:15:18 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84237' 00:26:04.817 05:15:18 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@967 -- # kill 84237 00:26:04.817 05:15:18 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@972 -- # wait 84237 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@87 -- # iscsitestfini 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:26:07.351 00:26:07.351 real 0m48.553s 00:26:07.351 user 0m55.866s 00:26:07.351 sys 0m15.089s 00:26:07.351 ************************************ 00:26:07.351 END TEST iscsi_tgt_multiconnection 00:26:07.351 ************************************ 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.351 
05:15:21 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@46 -- # '[' 1 -eq 1 ']' 00:26:07.351 05:15:21 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@47 -- # run_test iscsi_tgt_ext4test /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ext4test/ext4test.sh 00:26:07.351 05:15:21 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:07.351 05:15:21 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:07.351 05:15:21 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:26:07.351 ************************************ 00:26:07.351 START TEST iscsi_tgt_ext4test 00:26:07.351 ************************************ 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ext4test/ext4test.sh 00:26:07.351 * Looking for test storage... 00:26:07.351 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ext4test 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:26:07.351 05:15:21 
iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@24 -- # iscsitestinit 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@28 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@29 -- # node_base=iqn.2013-06.com.intel.ch.spdk 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@31 -- # timing_enter start_iscsi_tgt 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@10 -- # set +x 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@34 -- # pid=86636 00:26:07.351 
Process pid: 86636 00:26:07.351 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@35 -- # echo 'Process pid: 86636' 00:26:07.352 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@37 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:07.352 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@33 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:26:07.352 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@39 -- # waitforlisten 86636 00:26:07.352 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@829 -- # '[' -z 86636 ']' 00:26:07.352 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:07.352 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:07.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:07.352 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:07.352 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:07.352 05:15:21 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@10 -- # set +x 00:26:07.352 [2024-07-24 05:15:21.670083] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:26:07.352 [2024-07-24 05:15:21.670251] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86636 ] 00:26:07.352 [2024-07-24 05:15:21.852688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.610 [2024-07-24 05:15:22.074138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.869 05:15:22 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:07.869 05:15:22 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@862 -- # return 0 00:26:07.869 05:15:22 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 4 -b iqn.2013-06.com.intel.ch.spdk 00:26:08.128 05:15:22 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:26:08.694 [2024-07-24 05:15:23.080108] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:09.262 05:15:23 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:09.262 05:15:23 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:26:09.520 05:15:24 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 512 4096 --name Malloc0 00:26:10.455 Malloc0 00:26:10.455 iscsi_tgt is listening. Running tests... 00:26:10.455 05:15:24 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@44 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:26:10.455 05:15:24 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@46 -- # timing_exit start_iscsi_tgt 00:26:10.455 05:15:24 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:10.455 05:15:24 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@10 -- # set +x 00:26:10.455 05:15:24 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:26:10.714 05:15:25 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:26:10.973 05:15:25 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_create Malloc0 00:26:11.232 true 00:26:11.232 05:15:25 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target0 Target0_alias EE_Malloc0:0 1:2 64 -d 00:26:11.232 05:15:25 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@55 -- # sleep 1 00:26:12.609 05:15:26 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@57 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:26:12.609 10.0.0.1:3260,1 iqn.2013-06.com.intel.ch.spdk:Target0 00:26:12.609 05:15:26 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@58 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:26:12.609 Logging in to [iface: default, target: iqn.2013-06.com.intel.ch.spdk:Target0, portal: 10.0.0.1,3260] 00:26:12.609 Login to [iface: default, target: iqn.2013-06.com.intel.ch.spdk:Target0, portal: 10.0.0.1,3260] successful. 
00:26:12.609 05:15:26 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@59 -- # waitforiscsidevices 1 00:26:12.609 05:15:26 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@116 -- # local num=1 00:26:12.609 05:15:26 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:26:12.609 05:15:26 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:26:12.609 05:15:26 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:26:12.609 05:15:26 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:26:12.609 [2024-07-24 05:15:26.871783] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:26:12.609 05:15:26 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # n=1 00:26:12.609 05:15:26 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:26:12.609 05:15:26 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@123 -- # return 0 00:26:12.609 Test error injection 00:26:12.609 05:15:26 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@61 -- # echo 'Test error injection' 00:26:12.609 05:15:26 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_inject_error EE_Malloc0 all failure -n 1000 00:26:12.609 05:15:27 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # grep 'Attached scsi disk' 00:26:12.609 05:15:27 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # iscsiadm -m session -P 3 00:26:12.609 05:15:27 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # head -n1 00:26:12.609 05:15:27 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # awk '{print $4}' 00:26:12.609 05:15:27 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # dev=sda 00:26:12.609 05:15:27 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@65 -- # waitforfile /dev/sda 00:26:12.609 05:15:27 
iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1263 -- # local i=0 00:26:12.609 05:15:27 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1264 -- # '[' '!' -e /dev/sda ']' 00:26:12.609 05:15:27 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1270 -- # '[' '!' -e /dev/sda ']' 00:26:12.609 05:15:27 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1274 -- # return 0 00:26:12.609 05:15:27 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@66 -- # make_filesystem ext4 /dev/sda 00:26:12.609 05:15:27 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@924 -- # local fstype=ext4 00:26:12.609 05:15:27 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda 00:26:12.609 05:15:27 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@926 -- # local i=0 00:26:12.609 05:15:27 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@927 -- # local force 00:26:12.609 05:15:27 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:26:12.610 05:15:27 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@930 -- # force=-F 00:26:12.610 05:15:27 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:26:12.610 mke2fs 1.46.5 (30-Dec-2021) 00:26:13.127 Discarding device blocks: 0/131072 done 00:26:13.127 Creating filesystem with 131072 4k blocks and 32768 inodes 00:26:13.127 Filesystem UUID: c05e7c71-1d86-4f27-bd72-144d81bdcf82 00:26:13.127 Superblock backups stored on blocks: 00:26:13.127 32768, 98304 00:26:13.127 00:26:13.127 Allocating group tables: 0/4 Warning: could not erase sector 2: Input/output error 00:26:13.127 done 00:26:13.127 Warning: could not read block 0: Input/output error 00:26:13.385 Warning: could not erase sector 0: Input/output error 00:26:13.385 Writing inode tables: 0/4 done 00:26:13.385 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:26:13.385 05:15:27 
iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 0 -ge 15 ']' 00:26:13.385 05:15:27 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=1 00:26:13.385 [2024-07-24 05:15:27.932193] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:26:13.385 05:15:27 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:26:14.321 05:15:28 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:26:14.321 mke2fs 1.46.5 (30-Dec-2021) 00:26:14.579 Discarding device blocks: 0/131072 done 00:26:14.837 Creating filesystem with 131072 4k blocks and 32768 inodes 00:26:14.837 Filesystem UUID: 62f4cb47-6276-427d-a980-051f3c1ea944 00:26:14.837 Superblock backups stored on blocks: 00:26:14.837 32768, 98304 00:26:14.837 00:26:14.837 Allocating group tables: 0/4 done 00:26:14.837 Warning: could not erase sector 2: Input/output error 00:26:14.837 Warning: could not read block 0: Input/output error 00:26:14.837 Warning: could not erase sector 0: Input/output error 00:26:14.837 Writing inode tables: 0/4 done 00:26:15.095 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:26:15.095 05:15:29 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 1 -ge 15 ']' 00:26:15.095 05:15:29 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=2 00:26:15.095 [2024-07-24 05:15:29.513144] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:26:15.095 05:15:29 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:26:16.030 05:15:30 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:26:16.030 mke2fs 1.46.5 (30-Dec-2021) 00:26:16.289 Discarding device blocks: 0/131072 done 00:26:16.289 Creating filesystem with 131072 4k blocks and 32768 inodes 00:26:16.289 Filesystem UUID: 14207c16-1fdb-48ec-93e5-9c43ed000bf9 00:26:16.289 Superblock backups 
stored on blocks: 00:26:16.289 32768, 98304 00:26:16.289 00:26:16.289 Allocating group tables: 0/4 done 00:26:16.289 Warning: could not erase sector 2: Input/output error 00:26:16.548 Warning: could not read block 0: Input/output error 00:26:16.548 Warning: could not erase sector 0: Input/output error 00:26:16.548 Writing inode tables: 0/4 done 00:26:16.548 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:26:16.548 05:15:31 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 2 -ge 15 ']' 00:26:16.548 05:15:31 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=3 00:26:16.548 05:15:31 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:26:16.548 [2024-07-24 05:15:31.095410] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:26:17.483 05:15:32 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:26:17.483 mke2fs 1.46.5 (30-Dec-2021) 00:26:18.000 Discarding device blocks: 0/131072 done 00:26:18.000 Creating filesystem with 131072 4k blocks and 32768 inodes 00:26:18.000 Filesystem UUID: 286f4acc-f077-42a7-84cc-35c27666703c 00:26:18.000 Superblock backups stored on blocks: 00:26:18.000 32768, 98304 00:26:18.000 00:26:18.000 Allocating group tables: 0/4 done 00:26:18.000 Warning: could not erase sector 2: Input/output error 00:26:18.000 Warning: could not read block 0: Input/output error 00:26:18.258 Warning: could not erase sector 0: Input/output error 00:26:18.258 Writing inode tables: 0/4 done 00:26:18.258 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:26:18.258 05:15:32 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 3 -ge 15 ']' 00:26:18.258 [2024-07-24 05:15:32.798182] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:26:18.258 05:15:32 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=4 00:26:18.258 
05:15:32 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:26:19.193 05:15:33 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:26:19.193 mke2fs 1.46.5 (30-Dec-2021) 00:26:19.452 Discarding device blocks: 0/131072 done 00:26:19.711 Warning: could not erase sector 2: Input/output error 00:26:19.711 Creating filesystem with 131072 4k blocks and 32768 inodes 00:26:19.711 Filesystem UUID: 2d410454-0c65-42e3-aa55-ac3c510c2526 00:26:19.711 Superblock backups stored on blocks: 00:26:19.711 32768, 98304 00:26:19.711 00:26:19.711 Allocating group tables: 0/4 done 00:26:19.711 Warning: could not read block 0: Input/output error 00:26:19.711 Warning: could not erase sector 0: Input/output error 00:26:19.711 Writing inode tables: 0/4 done 00:26:19.970 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:26:19.970 05:15:34 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 4 -ge 15 ']' 00:26:19.970 05:15:34 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=5 00:26:19.970 05:15:34 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:26:19.970 [2024-07-24 05:15:34.377131] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:26:20.907 05:15:35 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:26:20.907 mke2fs 1.46.5 (30-Dec-2021) 00:26:21.166 Discarding device blocks: 0/131072 done 00:26:21.166 Creating filesystem with 131072 4k blocks and 32768 inodes 00:26:21.166 Filesystem UUID: a08a5209-28a1-4b60-abfe-fe5c942d868a 00:26:21.166 Superblock backups stored on blocks: 00:26:21.166 32768, 98304 00:26:21.166 00:26:21.166 Allocating group tables: 0/4 done 00:26:21.166 Warning: could not erase sector 2: Input/output error 00:26:21.166 Warning: could not read block 0: Input/output error 00:26:21.425 Warning: could not erase sector 0: Input/output error 
00:26:21.425 Writing inode tables: 0/4 done 00:26:21.425 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:26:21.425 05:15:35 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 5 -ge 15 ']' 00:26:21.425 05:15:35 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=6 00:26:21.425 [2024-07-24 05:15:35.959356] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:26:21.425 05:15:35 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:26:22.361 05:15:36 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:26:22.361 mke2fs 1.46.5 (30-Dec-2021) 00:26:22.620 Discarding device blocks: 0/131072 done 00:26:22.879 Creating filesystem with 131072 4k blocks and 32768 inodes 00:26:22.879 Filesystem UUID: cbea0ab8-aa74-4ce2-b5af-be3f29ecaee7 00:26:22.879 Superblock backups stored on blocks: 00:26:22.879 32768, 98304 00:26:22.879 00:26:22.879 Allocating group tables: 0/4 done 00:26:22.879 Warning: could not erase sector 2: Input/output error 00:26:22.879 Warning: could not read block 0: Input/output error 00:26:23.138 Warning: could not erase sector 0: Input/output error 00:26:23.138 Writing inode tables: 0/4 done 00:26:23.138 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:26:23.138 05:15:37 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 6 -ge 15 ']' 00:26:23.138 05:15:37 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=7 00:26:23.138 05:15:37 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:26:23.138 [2024-07-24 05:15:37.648355] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:26:24.075 05:15:38 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:26:24.075 mke2fs 1.46.5 (30-Dec-2021) 00:26:24.334 Discarding device blocks: 0/131072 done 00:26:24.593 
Creating filesystem with 131072 4k blocks and 32768 inodes 00:26:24.593 Filesystem UUID: 32051bff-a97e-41e8-abcc-f9bd186fd3f0 00:26:24.593 Superblock backups stored on blocks: 00:26:24.593 32768, 98304 00:26:24.593 00:26:24.593 Allocating group tables: 0/4Warning: could not erase sector 2: Input/output error 00:26:24.593  done 00:26:24.593 Warning: could not read block 0: Input/output error 00:26:24.593 Warning: could not erase sector 0: Input/output error 00:26:24.593 Writing inode tables: 0/4 done 00:26:24.593 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:26:24.852 05:15:39 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 7 -ge 15 ']' 00:26:24.852 05:15:39 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=8 00:26:24.852 05:15:39 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:26:24.852 [2024-07-24 05:15:39.226981] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:26:25.790 05:15:40 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:26:25.790 mke2fs 1.46.5 (30-Dec-2021) 00:26:26.049 Discarding device blocks: 0/131072 done 00:26:26.049 Creating filesystem with 131072 4k blocks and 32768 inodes 00:26:26.049 Filesystem UUID: 35ca218a-0f7a-4b6c-9c6d-c88a6f1cfab2 00:26:26.049 Superblock backups stored on blocks: 00:26:26.049 32768, 98304 00:26:26.049 00:26:26.049 Allocating group tables: 0/4 done 00:26:26.049 Warning: could not erase sector 2: Input/output error 00:26:26.049 Warning: could not read block 0: Input/output error 00:26:26.308 Warning: could not erase sector 0: Input/output error 00:26:26.308 Writing inode tables: 0/4 done 00:26:26.308 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:26:26.308 05:15:40 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 8 -ge 15 ']' 00:26:26.308 05:15:40 iscsi_tgt.iscsi_tgt_ext4test -- 
common/autotest_common.sh@939 -- # i=9 00:26:26.308 05:15:40 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:26:26.308 [2024-07-24 05:15:40.805167] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:26:27.245 05:15:41 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:26:27.245 mke2fs 1.46.5 (30-Dec-2021) 00:26:27.504 Discarding device blocks: 0/131072 done 00:26:27.504 Creating filesystem with 131072 4k blocks and 32768 inodes 00:26:27.504 Filesystem UUID: a213f39d-206e-4210-8744-928c09f4a481 00:26:27.504 Superblock backups stored on blocks: 00:26:27.504 32768, 98304 00:26:27.504 00:26:27.504 Allocating group tables: 0/4 done 00:26:27.504 Writing inode tables: 0/4 done 00:26:27.504 Creating journal (4096 blocks): done 00:26:27.504 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:26:27.504 05:15:42 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 9 -ge 15 ']' 00:26:27.504 [2024-07-24 05:15:42.081580] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:26:27.504 05:15:42 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=10 00:26:27.504 05:15:42 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:26:28.882 05:15:43 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:26:28.882 mke2fs 1.46.5 (30-Dec-2021) 00:26:28.882 Discarding device blocks: 0/131072 done 00:26:28.882 Creating filesystem with 131072 4k blocks and 32768 inodes 00:26:28.882 Filesystem UUID: 11037336-3380-44d8-acdf-b67b6b4d21f9 00:26:28.882 Superblock backups stored on blocks: 00:26:28.882 32768, 98304 00:26:28.882 00:26:28.882 Allocating group tables: 0/4 done 00:26:28.882 Writing inode tables: 0/4 done 00:26:28.882 Creating journal (4096 blocks): done 00:26:28.882 Writing 
superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:26:28.882 05:15:43 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 10 -ge 15 ']' 00:26:28.882 [2024-07-24 05:15:43.397761] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:26:28.882 05:15:43 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=11 00:26:28.882 05:15:43 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:26:29.816 05:15:44 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:26:29.816 mke2fs 1.46.5 (30-Dec-2021) 00:26:30.074 Discarding device blocks: 0/131072 done 00:26:30.074 Creating filesystem with 131072 4k blocks and 32768 inodes 00:26:30.074 Filesystem UUID: 94e52151-5491-4288-958d-627056bb9af7 00:26:30.074 Superblock backups stored on blocks: 00:26:30.074 32768, 98304 00:26:30.074 00:26:30.074 Allocating group tables: 0/4 done 00:26:30.074 Writing inode tables: 0/4 done 00:26:30.074 Creating journal (4096 blocks): done 00:26:30.075 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:26:30.333 05:15:44 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 11 -ge 15 ']' 00:26:30.333 05:15:44 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=12 00:26:30.333 05:15:44 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:26:31.269 05:15:45 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:26:31.269 mke2fs 1.46.5 (30-Dec-2021) 00:26:31.528 Discarding device blocks: 0/131072 done 00:26:31.528 Creating filesystem with 131072 4k blocks and 32768 inodes 00:26:31.528 Filesystem UUID: eed0813f-4636-4b11-b586-5330c924644c 00:26:31.528 Superblock backups stored on blocks: 00:26:31.528 32768, 98304 00:26:31.528 
00:26:31.528 Allocating group tables: 0/4 done 00:26:31.528 Writing inode tables: 0/4 done 00:26:31.528 Creating journal (4096 blocks): done 00:26:31.528 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:26:31.528 05:15:46 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 12 -ge 15 ']' 00:26:31.528 05:15:46 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=13 00:26:31.528 05:15:46 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:26:32.463 05:15:47 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:26:32.463 mke2fs 1.46.5 (30-Dec-2021) 00:26:32.722 Discarding device blocks: 0/131072 done 00:26:32.722 Creating filesystem with 131072 4k blocks and 32768 inodes 00:26:32.722 Filesystem UUID: 68538ff0-6130-4e55-82d2-e74f9f405de7 00:26:32.722 Superblock backups stored on blocks: 00:26:32.722 32768, 98304 00:26:32.722 00:26:32.722 Allocating group tables: 0/4 done 00:26:32.722 Writing inode tables: 0/4 done 00:26:32.722 Creating journal (4096 blocks): done 00:26:32.722 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:26:32.722 05:15:47 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 13 -ge 15 ']' 00:26:32.722 05:15:47 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=14 00:26:32.722 05:15:47 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:26:32.722 [2024-07-24 05:15:47.310560] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:26:34.098 05:15:48 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:26:34.098 mke2fs 1.46.5 (30-Dec-2021) 00:26:34.098 Discarding device blocks: 0/131072 done 00:26:34.098 Creating filesystem with 131072 4k blocks and 32768 
inodes 00:26:34.098 Filesystem UUID: 28339ace-f295-4071-9084-ce6a704f9fc5 00:26:34.098 Superblock backups stored on blocks: 00:26:34.098 32768, 98304 00:26:34.098 00:26:34.098 Allocating group tables: 0/4 done 00:26:34.098 Writing inode tables: 0/4 done 00:26:34.098 Creating journal (4096 blocks): done 00:26:34.098 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:26:34.098 05:15:48 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 14 -ge 15 ']' 00:26:34.098 [2024-07-24 05:15:48.622791] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:26:34.098 05:15:48 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=15 00:26:34.098 05:15:48 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:26:35.035 05:15:49 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:26:35.035 mke2fs 1.46.5 (30-Dec-2021) 00:26:35.293 Discarding device blocks: 0/131072 done 00:26:35.293 Creating filesystem with 131072 4k blocks and 32768 inodes 00:26:35.293 Filesystem UUID: d53d680c-f64a-4d59-a62a-9d5e9e1ca60a 00:26:35.293 Superblock backups stored on blocks: 00:26:35.293 32768, 98304 00:26:35.293 00:26:35.293 Allocating group tables: 0/4 done 00:26:35.293 Writing inode tables: 0/4 done 00:26:35.293 Creating journal (4096 blocks): done 00:26:35.559 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:26:35.559 05:15:49 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 15 -ge 15 ']' 00:26:35.559 mkfs failed as expected 00:26:35.559 Cleaning up iSCSI connection 00:26:35.559 05:15:49 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@937 -- # return 1 00:26:35.559 [2024-07-24 05:15:49.941589] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 
00:26:35.559 05:15:49 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@70 -- # echo 'mkfs failed as expected' 00:26:35.559 05:15:49 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@73 -- # iscsicleanup 00:26:35.559 05:15:49 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:26:35.559 05:15:49 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:26:35.559 Logging out of session [sid: 71, target: iqn.2013-06.com.intel.ch.spdk:Target0, portal: 10.0.0.1,3260] 00:26:35.559 Logout of [sid: 71, target: iqn.2013-06.com.intel.ch.spdk:Target0, portal: 10.0.0.1,3260] successful. 00:26:35.559 05:15:49 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:26:35.559 05:15:50 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@983 -- # rm -rf 00:26:35.559 05:15:50 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_inject_error EE_Malloc0 clear failure 00:26:35.842 05:15:50 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_delete_target_node iqn.2013-06.com.intel.ch.spdk:Target0 00:26:36.102 Error injection test done 00:26:36.102 05:15:50 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@76 -- # echo 'Error injection test done' 00:26:36.102 05:15:50 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@78 -- # get_bdev_size Nvme0n1 00:26:36.102 05:15:50 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1376 -- # local bdev_name=Nvme0n1 00:26:36.102 05:15:50 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1377 -- # local bdev_info 00:26:36.102 05:15:50 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1378 -- # local bs 00:26:36.102 05:15:50 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1379 -- # local nb 00:26:36.102 05:15:50 iscsi_tgt.iscsi_tgt_ext4test -- 
common/autotest_common.sh@1380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 00:26:36.360 05:15:50 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:26:36.360 { 00:26:36.360 "name": "Nvme0n1", 00:26:36.360 "aliases": [ 00:26:36.360 "58713499-a09f-4e43-8c8e-e88bbe82cb76" 00:26:36.360 ], 00:26:36.360 "product_name": "NVMe disk", 00:26:36.360 "block_size": 4096, 00:26:36.360 "num_blocks": 1310720, 00:26:36.360 "uuid": "58713499-a09f-4e43-8c8e-e88bbe82cb76", 00:26:36.360 "assigned_rate_limits": { 00:26:36.360 "rw_ios_per_sec": 0, 00:26:36.360 "rw_mbytes_per_sec": 0, 00:26:36.360 "r_mbytes_per_sec": 0, 00:26:36.360 "w_mbytes_per_sec": 0 00:26:36.360 }, 00:26:36.360 "claimed": false, 00:26:36.360 "zoned": false, 00:26:36.360 "supported_io_types": { 00:26:36.360 "read": true, 00:26:36.360 "write": true, 00:26:36.360 "unmap": true, 00:26:36.360 "flush": true, 00:26:36.360 "reset": true, 00:26:36.360 "nvme_admin": true, 00:26:36.360 "nvme_io": true, 00:26:36.360 "nvme_io_md": false, 00:26:36.360 "write_zeroes": true, 00:26:36.360 "zcopy": false, 00:26:36.360 "get_zone_info": false, 00:26:36.360 "zone_management": false, 00:26:36.360 "zone_append": false, 00:26:36.360 "compare": true, 00:26:36.360 "compare_and_write": false, 00:26:36.360 "abort": true, 00:26:36.360 "seek_hole": false, 00:26:36.360 "seek_data": false, 00:26:36.360 "copy": true, 00:26:36.360 "nvme_iov_md": false 00:26:36.360 }, 00:26:36.360 "driver_specific": { 00:26:36.360 "nvme": [ 00:26:36.360 { 00:26:36.360 "pci_address": "0000:00:10.0", 00:26:36.360 "trid": { 00:26:36.360 "trtype": "PCIe", 00:26:36.360 "traddr": "0000:00:10.0" 00:26:36.360 }, 00:26:36.360 "ctrlr_data": { 00:26:36.360 "cntlid": 0, 00:26:36.360 "vendor_id": "0x1b36", 00:26:36.360 "model_number": "QEMU NVMe Ctrl", 00:26:36.360 "serial_number": "12340", 00:26:36.360 "firmware_revision": "8.0.0", 00:26:36.360 "subnqn": "nqn.2019-08.org.qemu:12340", 00:26:36.360 "oacs": { 
00:26:36.360 "security": 0, 00:26:36.360 "format": 1, 00:26:36.360 "firmware": 0, 00:26:36.360 "ns_manage": 1 00:26:36.360 }, 00:26:36.360 "multi_ctrlr": false, 00:26:36.360 "ana_reporting": false 00:26:36.360 }, 00:26:36.360 "vs": { 00:26:36.360 "nvme_version": "1.4" 00:26:36.360 }, 00:26:36.360 "ns_data": { 00:26:36.360 "id": 1, 00:26:36.360 "can_share": false 00:26:36.360 } 00:26:36.360 } 00:26:36.360 ], 00:26:36.360 "mp_policy": "active_passive" 00:26:36.360 } 00:26:36.360 } 00:26:36.360 ]' 00:26:36.360 05:15:50 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:26:36.360 05:15:50 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1381 -- # bs=4096 00:26:36.360 05:15:50 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:26:36.360 05:15:50 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1382 -- # nb=1310720 00:26:36.360 05:15:50 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1385 -- # bdev_size=5120 00:26:36.360 05:15:50 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1386 -- # echo 5120 00:26:36.360 05:15:50 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@78 -- # bdev_size=5120 00:26:36.361 05:15:50 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@79 -- # split_size=2560 00:26:36.361 05:15:50 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@80 -- # split_size=2560 00:26:36.361 05:15:50 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create Nvme0n1 2 -s 2560 00:26:36.619 Nvme0n1p0 Nvme0n1p1 00:26:36.619 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target1 Target1_alias Nvme0n1p0:0 1:2 64 -d 00:26:36.877 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@84 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:26:36.877 10.0.0.1:3260,1 
iqn.2013-06.com.intel.ch.spdk:Target1 00:26:36.877 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@85 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:26:36.877 Logging in to [iface: default, target: iqn.2013-06.com.intel.ch.spdk:Target1, portal: 10.0.0.1,3260] 00:26:36.877 Login to [iface: default, target: iqn.2013-06.com.intel.ch.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:26:36.877 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@86 -- # waitforiscsidevices 1 00:26:36.877 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@116 -- # local num=1 00:26:36.877 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:26:36.877 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:26:36.877 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:26:36.877 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:26:36.877 [2024-07-24 05:15:51.364101] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:26:36.877 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # n=1 00:26:36.877 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:26:36.877 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@123 -- # return 0 00:26:36.877 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # iscsiadm -m session -P 3 00:26:36.877 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # grep 'Attached scsi disk' 00:26:36.877 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # awk '{print $4}' 00:26:36.877 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # head -n1 00:26:36.877 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # dev=sda 00:26:36.877 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@89 -- 
# waitforfile /dev/sda 00:26:36.877 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1263 -- # local i=0 00:26:36.877 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1264 -- # '[' '!' -e /dev/sda ']' 00:26:36.877 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1270 -- # '[' '!' -e /dev/sda ']' 00:26:36.878 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1274 -- # return 0 00:26:36.878 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@91 -- # make_filesystem ext4 /dev/sda 00:26:36.878 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@924 -- # local fstype=ext4 00:26:36.878 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda 00:26:36.878 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@926 -- # local i=0 00:26:36.878 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@927 -- # local force 00:26:36.878 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:26:36.878 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@930 -- # force=-F 00:26:36.878 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:26:36.878 mke2fs 1.46.5 (30-Dec-2021) 00:26:36.878 Discarding device blocks: 0/655360 done 00:26:36.878 Creating filesystem with 655360 4k blocks and 163840 inodes 00:26:36.878 Filesystem UUID: 889e79b5-6a8a-4bef-881c-015ed84117e4 00:26:36.878 Superblock backups stored on blocks: 00:26:36.878 32768, 98304, 163840, 229376, 294912 00:26:36.878 00:26:36.878 Allocating group tables: 0/20 done 00:26:36.878 Writing inode tables: 0/20 done 00:26:37.135 Creating journal (16384 blocks): done 00:26:37.393 Writing superblocks and filesystem accounting information: 0/20 done 00:26:37.393 00:26:37.393 [2024-07-24 05:15:51.775427] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD 
page 0xb9 00:26:37.393 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@943 -- # return 0 00:26:37.393 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@92 -- # mkdir -p /mnt/sdadir 00:26:37.393 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@93 -- # mount -o sync /dev/sda /mnt/sdadir 00:26:37.393 05:15:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@95 -- # rsync -qav --exclude=.git '--exclude=*.o' /home/vagrant/spdk_repo/spdk/ /mnt/sdadir/spdk 00:27:45.102 05:16:58 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@97 -- # make -C /mnt/sdadir/spdk clean 00:27:45.102 make: Entering directory '/mnt/sdadir/spdk' 00:28:41.372 make[1]: Nothing to be done for 'clean'. 00:28:41.372 make: Leaving directory '/mnt/sdadir/spdk' 00:28:41.372 05:17:53 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@98 -- # cd /mnt/sdadir/spdk 00:28:41.372 05:17:53 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@98 -- # ./configure --disable-unit-tests --disable-tests 00:28:41.372 Using default SPDK env in /mnt/sdadir/spdk/lib/env_dpdk 00:28:41.372 Using default DPDK in /mnt/sdadir/spdk/dpdk/build 00:29:03.304 Configuring ISA-L (logfile: /mnt/sdadir/spdk/.spdk-isal.log)...done. 00:29:21.383 Configuring ISA-L-crypto (logfile: /mnt/sdadir/spdk/.spdk-isal-crypto.log)...done. 00:29:21.951 Creating mk/config.mk...done. 00:29:21.951 Creating mk/cc.flags.mk...done. 00:29:21.951 Type 'make' to build. 00:29:21.951 05:18:36 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@99 -- # make -C /mnt/sdadir/spdk -j 00:29:21.951 make: Entering directory '/mnt/sdadir/spdk' 00:29:22.518 make[1]: Nothing to be done for 'all'. 
00:29:49.053 The Meson build system 00:29:49.053 Version: 1.3.1 00:29:49.053 Source dir: /mnt/sdadir/spdk/dpdk 00:29:49.053 Build dir: /mnt/sdadir/spdk/dpdk/build-tmp 00:29:49.053 Build type: native build 00:29:49.053 Program cat found: YES (/usr/bin/cat) 00:29:49.053 Project name: DPDK 00:29:49.053 Project version: 24.03.0 00:29:49.053 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:29:49.053 C linker for the host machine: cc ld.bfd 2.39-16 00:29:49.053 Host machine cpu family: x86_64 00:29:49.053 Host machine cpu: x86_64 00:29:49.053 Program pkg-config found: YES (/usr/bin/pkg-config) 00:29:49.053 Program check-symbols.sh found: YES (/mnt/sdadir/spdk/dpdk/buildtools/check-symbols.sh) 00:29:49.053 Program options-ibverbs-static.sh found: YES (/mnt/sdadir/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:29:49.053 Program python3 found: YES (/usr/bin/python3) 00:29:49.053 Program cat found: YES (/usr/bin/cat) 00:29:49.053 Compiler for C supports arguments -march=native: YES 00:29:49.053 Checking for size of "void *" : 8 00:29:49.053 Checking for size of "void *" : 8 (cached) 00:29:49.053 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:29:49.053 Library m found: YES 00:29:49.053 Library numa found: YES 00:29:49.053 Has header "numaif.h" : YES 00:29:49.053 Library fdt found: NO 00:29:49.053 Library execinfo found: NO 00:29:49.053 Has header "execinfo.h" : YES 00:29:49.053 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:29:49.053 Run-time dependency libarchive found: NO (tried pkgconfig) 00:29:49.053 Run-time dependency libbsd found: NO (tried pkgconfig) 00:29:49.053 Run-time dependency jansson found: NO (tried pkgconfig) 00:29:49.053 Run-time dependency openssl found: YES 3.0.9 00:29:49.053 Run-time dependency libpcap found: YES 1.10.4 00:29:49.053 Has header "pcap.h" with dependency libpcap: YES 00:29:49.053 Compiler for C supports arguments -Wcast-qual: YES 00:29:49.053 Compiler for C 
supports arguments -Wdeprecated: YES 00:29:49.053 Compiler for C supports arguments -Wformat: YES 00:29:49.053 Compiler for C supports arguments -Wformat-nonliteral: YES 00:29:49.053 Compiler for C supports arguments -Wformat-security: YES 00:29:49.053 Compiler for C supports arguments -Wmissing-declarations: YES 00:29:49.053 Compiler for C supports arguments -Wmissing-prototypes: YES 00:29:49.053 Compiler for C supports arguments -Wnested-externs: YES 00:29:49.053 Compiler for C supports arguments -Wold-style-definition: YES 00:29:49.053 Compiler for C supports arguments -Wpointer-arith: YES 00:29:49.053 Compiler for C supports arguments -Wsign-compare: YES 00:29:49.053 Compiler for C supports arguments -Wstrict-prototypes: YES 00:29:49.053 Compiler for C supports arguments -Wundef: YES 00:29:49.053 Compiler for C supports arguments -Wwrite-strings: YES 00:29:49.053 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:29:49.053 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:29:49.054 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:29:49.054 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:29:49.054 Program objdump found: YES (/usr/bin/objdump) 00:29:49.054 Compiler for C supports arguments -mavx512f: YES 00:29:49.054 Checking if "AVX512 checking" compiles: YES 00:29:49.054 Fetching value of define "__SSE4_2__" : 1 00:29:49.054 Fetching value of define "__AES__" : 1 00:29:49.054 Fetching value of define "__AVX__" : 1 00:29:49.054 Fetching value of define "__AVX2__" : 1 00:29:49.054 Fetching value of define "__AVX512BW__" : 1 00:29:49.054 Fetching value of define "__AVX512CD__" : 1 00:29:49.054 Fetching value of define "__AVX512DQ__" : 1 00:29:49.054 Fetching value of define "__AVX512F__" : 1 00:29:49.054 Fetching value of define "__AVX512VL__" : 1 00:29:49.054 Fetching value of define "__PCLMUL__" : 1 00:29:49.054 Fetching value of define "__RDRND__" : 1 00:29:49.054 Fetching value of 
define "__RDSEED__" : 1 00:29:49.054 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:29:49.054 Fetching value of define "__znver1__" : (undefined) 00:29:49.054 Fetching value of define "__znver2__" : (undefined) 00:29:49.054 Fetching value of define "__znver3__" : (undefined) 00:29:49.054 Fetching value of define "__znver4__" : (undefined) 00:29:49.054 Compiler for C supports arguments -Wno-format-truncation: YES 00:29:49.054 Checking for function "getentropy" : NO 00:29:49.054 Fetching value of define "__PCLMUL__" : 1 (cached) 00:29:49.054 Fetching value of define "__AVX512F__" : 1 (cached) 00:29:49.054 Fetching value of define "__AVX512BW__" : 1 (cached) 00:29:49.054 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:29:49.054 Fetching value of define "__AVX512VL__" : 1 (cached) 00:29:49.054 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:29:49.054 Compiler for C supports arguments -mpclmul: YES 00:29:49.054 Compiler for C supports arguments -maes: YES 00:29:49.054 Compiler for C supports arguments -mavx512f: YES (cached) 00:29:49.054 Compiler for C supports arguments -mavx512bw: YES 00:29:49.054 Compiler for C supports arguments -mavx512dq: YES 00:29:49.054 Compiler for C supports arguments -mavx512vl: YES 00:29:49.054 Compiler for C supports arguments -mvpclmulqdq: YES 00:29:49.054 Compiler for C supports arguments -mavx2: YES 00:29:49.054 Compiler for C supports arguments -mavx: YES 00:29:49.054 Compiler for C supports arguments -Wno-cast-qual: YES 00:29:49.054 Has header "linux/userfaultfd.h" : YES 00:29:49.054 Has header "linux/vduse.h" : YES 00:29:49.054 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:29:49.054 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:29:49.054 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:29:49.054 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:29:49.054 Message: Disabling event/* drivers: 
missing internal dependency "eventdev" 00:29:49.054 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:29:49.054 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:29:49.054 Program doxygen found: YES (/usr/bin/doxygen) 00:29:49.054 Configuring doxy-api-html.conf using configuration 00:29:49.054 Configuring doxy-api-man.conf using configuration 00:29:49.054 Program mandb found: YES (/usr/bin/mandb) 00:29:49.054 Program sphinx-build found: NO 00:29:49.054 Configuring rte_build_config.h using configuration 00:29:49.054 Message: 00:29:49.054 ================= 00:29:49.054 Applications Enabled 00:29:49.054 ================= 00:29:49.054 00:29:49.054 apps: 00:29:49.054 00:29:49.054 00:29:49.054 Message: 00:29:49.054 ================= 00:29:49.054 Libraries Enabled 00:29:49.054 ================= 00:29:49.054 00:29:49.054 libs: 00:29:49.054 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:29:49.054 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:29:49.054 cryptodev, dmadev, power, reorder, security, vhost, 00:29:49.054 00:29:49.054 Message: 00:29:49.054 =============== 00:29:49.054 Drivers Enabled 00:29:49.054 =============== 00:29:49.054 00:29:49.054 common: 00:29:49.054 00:29:49.054 bus: 00:29:49.054 pci, vdev, 00:29:49.054 mempool: 00:29:49.054 ring, 00:29:49.054 dma: 00:29:49.054 00:29:49.054 net: 00:29:49.054 00:29:49.054 crypto: 00:29:49.054 00:29:49.054 compress: 00:29:49.054 00:29:49.054 vdpa: 00:29:49.054 00:29:49.054 00:29:49.054 Message: 00:29:49.054 ================= 00:29:49.054 Content Skipped 00:29:49.054 ================= 00:29:49.054 00:29:49.054 apps: 00:29:49.054 dumpcap: explicitly disabled via build config 00:29:49.054 graph: explicitly disabled via build config 00:29:49.054 pdump: explicitly disabled via build config 00:29:49.054 proc-info: explicitly disabled via build config 00:29:49.054 test-acl: explicitly disabled via build config 00:29:49.054 test-bbdev: explicitly 
disabled via build config 00:29:49.054 test-cmdline: explicitly disabled via build config 00:29:49.054 test-compress-perf: explicitly disabled via build config 00:29:49.054 test-crypto-perf: explicitly disabled via build config 00:29:49.054 test-dma-perf: explicitly disabled via build config 00:29:49.054 test-eventdev: explicitly disabled via build config 00:29:49.054 test-fib: explicitly disabled via build config 00:29:49.054 test-flow-perf: explicitly disabled via build config 00:29:49.054 test-gpudev: explicitly disabled via build config 00:29:49.054 test-mldev: explicitly disabled via build config 00:29:49.054 test-pipeline: explicitly disabled via build config 00:29:49.054 test-pmd: explicitly disabled via build config 00:29:49.054 test-regex: explicitly disabled via build config 00:29:49.054 test-sad: explicitly disabled via build config 00:29:49.054 test-security-perf: explicitly disabled via build config 00:29:49.054 00:29:49.054 libs: 00:29:49.054 argparse: explicitly disabled via build config 00:29:49.054 metrics: explicitly disabled via build config 00:29:49.054 acl: explicitly disabled via build config 00:29:49.054 bbdev: explicitly disabled via build config 00:29:49.054 bitratestats: explicitly disabled via build config 00:29:49.054 bpf: explicitly disabled via build config 00:29:49.054 cfgfile: explicitly disabled via build config 00:29:49.054 distributor: explicitly disabled via build config 00:29:49.054 efd: explicitly disabled via build config 00:29:49.054 eventdev: explicitly disabled via build config 00:29:49.054 dispatcher: explicitly disabled via build config 00:29:49.054 gpudev: explicitly disabled via build config 00:29:49.054 gro: explicitly disabled via build config 00:29:49.054 gso: explicitly disabled via build config 00:29:49.054 ip_frag: explicitly disabled via build config 00:29:49.054 jobstats: explicitly disabled via build config 00:29:49.054 latencystats: explicitly disabled via build config 00:29:49.054 lpm: explicitly disabled via 
build config 00:29:49.054 member: explicitly disabled via build config 00:29:49.054 pcapng: explicitly disabled via build config 00:29:49.054 rawdev: explicitly disabled via build config 00:29:49.054 regexdev: explicitly disabled via build config 00:29:49.054 mldev: explicitly disabled via build config 00:29:49.054 rib: explicitly disabled via build config 00:29:49.054 sched: explicitly disabled via build config 00:29:49.054 stack: explicitly disabled via build config 00:29:49.054 ipsec: explicitly disabled via build config 00:29:49.054 pdcp: explicitly disabled via build config 00:29:49.054 fib: explicitly disabled via build config 00:29:49.054 port: explicitly disabled via build config 00:29:49.054 pdump: explicitly disabled via build config 00:29:49.054 table: explicitly disabled via build config 00:29:49.054 pipeline: explicitly disabled via build config 00:29:49.054 graph: explicitly disabled via build config 00:29:49.054 node: explicitly disabled via build config 00:29:49.054 00:29:49.054 drivers: 00:29:49.054 common/cpt: not in enabled drivers build config 00:29:49.054 common/dpaax: not in enabled drivers build config 00:29:49.054 common/iavf: not in enabled drivers build config 00:29:49.054 common/idpf: not in enabled drivers build config 00:29:49.054 common/ionic: not in enabled drivers build config 00:29:49.054 common/mvep: not in enabled drivers build config 00:29:49.054 common/octeontx: not in enabled drivers build config 00:29:49.054 bus/auxiliary: not in enabled drivers build config 00:29:49.054 bus/cdx: not in enabled drivers build config 00:29:49.054 bus/dpaa: not in enabled drivers build config 00:29:49.054 bus/fslmc: not in enabled drivers build config 00:29:49.054 bus/ifpga: not in enabled drivers build config 00:29:49.054 bus/platform: not in enabled drivers build config 00:29:49.054 bus/uacce: not in enabled drivers build config 00:29:49.054 bus/vmbus: not in enabled drivers build config 00:29:49.054 common/cnxk: not in enabled drivers build 
config 00:29:49.054 common/mlx5: not in enabled drivers build config 00:29:49.054 common/nfp: not in enabled drivers build config 00:29:49.054 common/nitrox: not in enabled drivers build config 00:29:49.054 common/qat: not in enabled drivers build config 00:29:49.054 common/sfc_efx: not in enabled drivers build config 00:29:49.054 mempool/bucket: not in enabled drivers build config 00:29:49.054 mempool/cnxk: not in enabled drivers build config 00:29:49.054 mempool/dpaa: not in enabled drivers build config 00:29:49.054 mempool/dpaa2: not in enabled drivers build config 00:29:49.054 mempool/octeontx: not in enabled drivers build config 00:29:49.054 mempool/stack: not in enabled drivers build config 00:29:49.054 dma/cnxk: not in enabled drivers build config 00:29:49.054 dma/dpaa: not in enabled drivers build config 00:29:49.054 dma/dpaa2: not in enabled drivers build config 00:29:49.054 dma/hisilicon: not in enabled drivers build config 00:29:49.054 dma/idxd: not in enabled drivers build config 00:29:49.054 dma/ioat: not in enabled drivers build config 00:29:49.054 dma/skeleton: not in enabled drivers build config 00:29:49.054 net/af_packet: not in enabled drivers build config 00:29:49.054 net/af_xdp: not in enabled drivers build config 00:29:49.054 net/ark: not in enabled drivers build config 00:29:49.054 net/atlantic: not in enabled drivers build config 00:29:49.054 net/avp: not in enabled drivers build config 00:29:49.054 net/axgbe: not in enabled drivers build config 00:29:49.054 net/bnx2x: not in enabled drivers build config 00:29:49.055 net/bnxt: not in enabled drivers build config 00:29:49.055 net/bonding: not in enabled drivers build config 00:29:49.055 net/cnxk: not in enabled drivers build config 00:29:49.055 net/cpfl: not in enabled drivers build config 00:29:49.055 net/cxgbe: not in enabled drivers build config 00:29:49.055 net/dpaa: not in enabled drivers build config 00:29:49.055 net/dpaa2: not in enabled drivers build config 00:29:49.055 net/e1000: not 
in enabled drivers build config 00:29:49.055 net/ena: not in enabled drivers build config 00:29:49.055 net/enetc: not in enabled drivers build config 00:29:49.055 net/enetfec: not in enabled drivers build config 00:29:49.055 net/enic: not in enabled drivers build config 00:29:49.055 net/failsafe: not in enabled drivers build config 00:29:49.055 net/fm10k: not in enabled drivers build config 00:29:49.055 net/gve: not in enabled drivers build config 00:29:49.055 net/hinic: not in enabled drivers build config 00:29:49.055 net/hns3: not in enabled drivers build config 00:29:49.055 net/i40e: not in enabled drivers build config 00:29:49.055 net/iavf: not in enabled drivers build config 00:29:49.055 net/ice: not in enabled drivers build config 00:29:49.055 net/idpf: not in enabled drivers build config 00:29:49.055 net/igc: not in enabled drivers build config 00:29:49.055 net/ionic: not in enabled drivers build config 00:29:49.055 net/ipn3ke: not in enabled drivers build config 00:29:49.055 net/ixgbe: not in enabled drivers build config 00:29:49.055 net/mana: not in enabled drivers build config 00:29:49.055 net/memif: not in enabled drivers build config 00:29:49.055 net/mlx4: not in enabled drivers build config 00:29:49.055 net/mlx5: not in enabled drivers build config 00:29:49.055 net/mvneta: not in enabled drivers build config 00:29:49.055 net/mvpp2: not in enabled drivers build config 00:29:49.055 net/netvsc: not in enabled drivers build config 00:29:49.055 net/nfb: not in enabled drivers build config 00:29:49.055 net/nfp: not in enabled drivers build config 00:29:49.055 net/ngbe: not in enabled drivers build config 00:29:49.055 net/null: not in enabled drivers build config 00:29:49.055 net/octeontx: not in enabled drivers build config 00:29:49.055 net/octeon_ep: not in enabled drivers build config 00:29:49.055 net/pcap: not in enabled drivers build config 00:29:49.055 net/pfe: not in enabled drivers build config 00:29:49.055 net/qede: not in enabled drivers build 
config 00:29:49.055 net/ring: not in enabled drivers build config 00:29:49.055 net/sfc: not in enabled drivers build config 00:29:49.055 net/softnic: not in enabled drivers build config 00:29:49.055 net/tap: not in enabled drivers build config 00:29:49.055 net/thunderx: not in enabled drivers build config 00:29:49.055 net/txgbe: not in enabled drivers build config 00:29:49.055 net/vdev_netvsc: not in enabled drivers build config 00:29:49.055 net/vhost: not in enabled drivers build config 00:29:49.055 net/virtio: not in enabled drivers build config 00:29:49.055 net/vmxnet3: not in enabled drivers build config 00:29:49.055 raw/*: missing internal dependency, "rawdev" 00:29:49.055 crypto/armv8: not in enabled drivers build config 00:29:49.055 crypto/bcmfs: not in enabled drivers build config 00:29:49.055 crypto/caam_jr: not in enabled drivers build config 00:29:49.055 crypto/ccp: not in enabled drivers build config 00:29:49.055 crypto/cnxk: not in enabled drivers build config 00:29:49.055 crypto/dpaa_sec: not in enabled drivers build config 00:29:49.055 crypto/dpaa2_sec: not in enabled drivers build config 00:29:49.055 crypto/ipsec_mb: not in enabled drivers build config 00:29:49.055 crypto/mlx5: not in enabled drivers build config 00:29:49.055 crypto/mvsam: not in enabled drivers build config 00:29:49.055 crypto/nitrox: not in enabled drivers build config 00:29:49.055 crypto/null: not in enabled drivers build config 00:29:49.055 crypto/octeontx: not in enabled drivers build config 00:29:49.055 crypto/openssl: not in enabled drivers build config 00:29:49.055 crypto/scheduler: not in enabled drivers build config 00:29:49.055 crypto/uadk: not in enabled drivers build config 00:29:49.055 crypto/virtio: not in enabled drivers build config 00:29:49.055 compress/isal: not in enabled drivers build config 00:29:49.055 compress/mlx5: not in enabled drivers build config 00:29:49.055 compress/nitrox: not in enabled drivers build config 00:29:49.055 compress/octeontx: not in 
enabled drivers build config 00:29:49.055 compress/zlib: not in enabled drivers build config 00:29:49.055 regex/*: missing internal dependency, "regexdev" 00:29:49.055 ml/*: missing internal dependency, "mldev" 00:29:49.055 vdpa/ifc: not in enabled drivers build config 00:29:49.055 vdpa/mlx5: not in enabled drivers build config 00:29:49.055 vdpa/nfp: not in enabled drivers build config 00:29:49.055 vdpa/sfc: not in enabled drivers build config 00:29:49.055 event/*: missing internal dependency, "eventdev" 00:29:49.055 baseband/*: missing internal dependency, "bbdev" 00:29:49.055 gpu/*: missing internal dependency, "gpudev" 00:29:49.055 00:29:49.055 00:29:49.055 Build targets in project: 61 00:29:49.055 00:29:49.055 DPDK 24.03.0 00:29:49.055 00:29:49.055 User defined options 00:29:49.055 default_library : static 00:29:49.055 libdir : lib 00:29:49.055 prefix : /mnt/sdadir/spdk/dpdk/build 00:29:49.055 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Wno-error 00:29:49.055 c_link_args : 00:29:49.055 cpu_instruction_set: native 00:29:49.055 disable_apps : test-flow-perf,test-fib,test-pmd,graph,test-pipeline,pdump,test-compress-perf,test-mldev,test-bbdev,test-sad,test-eventdev,test-gpudev,test-regex,test-dma-perf,test-crypto-perf,proc-info,test-security-perf,test,test-acl,dumpcap,test-cmdline 00:29:49.055 disable_libs : dispatcher,rawdev,pdcp,bpf,graph,cfgfile,ip_frag,gpudev,ipsec,rib,pdump,distributor,argparse,efd,member,fib,gro,node,stack,pcapng,latencystats,gso,eventdev,sched,acl,bbdev,bitratestats,regexdev,port,metrics,lpm,table,mldev,jobstats,pipeline 00:29:49.055 enable_docs : false 00:29:49.055 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:29:49.055 enable_kmods : false 00:29:49.055 max_lcores : 128 00:29:49.055 tests : false 00:29:49.055 00:29:49.055 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:29:49.055 ninja: Entering directory `/mnt/sdadir/spdk/dpdk/build-tmp' 00:29:49.055 [1/244] 
Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:29:49.055 [2/244] Compiling C object lib/librte_log.a.p/log_log.c.o 00:29:49.055 [3/244] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:29:49.055 [4/244] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:29:49.055 [5/244] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:29:49.055 [6/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:29:49.055 [7/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:29:49.055 [8/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:29:49.055 [9/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:29:49.055 [10/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:29:49.055 [11/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:29:49.055 [12/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:29:49.055 [13/244] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:29:49.055 [14/244] Linking static target lib/librte_log.a 00:29:49.055 [15/244] Linking target lib/librte_log.so.24.1 00:29:49.055 [16/244] Linking static target lib/librte_kvargs.a 00:29:49.055 [17/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:29:49.055 [18/244] Linking static target lib/librte_telemetry.a 00:29:49.055 [19/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:29:49.055 [20/244] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:29:49.055 [21/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:29:49.055 [22/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:29:49.055 [23/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:29:49.055 [24/244] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:29:49.055 [25/244] Linking target lib/librte_kvargs.so.24.1 00:29:49.055 [26/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:29:49.055 [27/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:29:49.055 [28/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:29:49.055 [29/244] Linking target lib/librte_telemetry.so.24.1 00:29:49.055 [30/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:29:49.055 [31/244] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:29:49.055 [32/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:29:49.055 [33/244] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:29:49.055 [34/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:29:49.055 [35/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:29:49.055 [36/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:29:49.055 [37/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:29:49.055 [38/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:29:49.055 [39/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:29:49.055 [40/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:29:49.055 [41/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:29:49.055 [42/244] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:29:49.055 [43/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:29:49.314 [44/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:29:49.314 [45/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:29:49.314 [46/244] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:29:49.314 [47/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:29:49.314 [48/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:29:49.573 [49/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:29:49.573 [50/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:29:49.573 [51/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:29:49.573 [52/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:29:49.573 [53/244] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:29:49.573 [54/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:29:49.573 [55/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:29:49.832 [56/244] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:29:49.832 [57/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:29:49.832 [58/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:29:49.832 [59/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:29:49.832 [60/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:29:49.832 [61/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:29:49.832 [62/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:29:50.090 [63/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:29:50.090 [64/244] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:29:50.090 [65/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:29:50.348 [66/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:29:50.348 [67/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:29:50.348 [68/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:29:50.348 
[69/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:29:50.348 [70/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:29:50.348 [71/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:29:50.348 [72/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:29:50.348 [73/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:29:50.348 [74/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:29:50.607 [75/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:29:50.607 [76/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:29:50.607 [77/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:29:50.607 [78/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:29:50.865 [79/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:29:50.865 [80/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:29:50.865 [81/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:29:50.865 [82/244] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:29:50.865 [83/244] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:29:51.124 [84/244] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:29:51.124 [85/244] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:29:51.124 [86/244] Linking static target lib/librte_ring.a 00:29:51.124 [87/244] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:29:51.381 [88/244] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:29:51.381 [89/244] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:29:51.381 [90/244] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:29:51.381 [91/244] Linking static target lib/net/libnet_crc_avx512_lib.a 00:29:51.381 [92/244] Compiling C 
object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:29:51.381 [93/244] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:29:51.381 [94/244] Linking static target lib/librte_mempool.a 00:29:51.381 [95/244] Linking static target lib/librte_rcu.a 00:29:51.381 [96/244] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:29:51.641 [97/244] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:29:51.641 [98/244] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:29:51.641 [99/244] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:29:51.641 [100/244] Linking static target lib/librte_eal.a 00:29:51.641 [101/244] Linking target lib/librte_eal.so.24.1 00:29:51.641 [102/244] Linking static target lib/librte_mbuf.a 00:29:51.641 [103/244] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:29:51.641 [104/244] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:29:51.900 [105/244] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:29:51.900 [106/244] Linking static target lib/librte_net.a 00:29:51.900 [107/244] Linking static target lib/librte_meter.a 00:29:51.900 [108/244] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:29:51.900 [109/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:29:51.900 [110/244] Linking target lib/librte_ring.so.24.1 00:29:52.158 [111/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:29:52.158 [112/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:29:52.158 [113/244] Linking target lib/librte_meter.so.24.1 00:29:52.158 [114/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:29:52.158 [115/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:29:52.158 [116/244] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:29:52.416 [117/244] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 
00:29:52.416 [118/244] Linking target lib/librte_rcu.so.24.1 00:29:52.416 [119/244] Linking target lib/librte_mempool.so.24.1 00:29:52.416 [120/244] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:29:52.416 [121/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:29:52.674 [122/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:29:52.674 [123/244] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:29:52.674 [124/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:29:52.674 [125/244] Linking target lib/librte_mbuf.so.24.1 00:29:52.674 [126/244] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:29:52.674 [127/244] Linking static target lib/librte_pci.a 00:29:52.674 [128/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:29:52.932 [129/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:29:52.932 [130/244] Linking target lib/librte_pci.so.24.1 00:29:52.932 [131/244] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:29:52.932 [132/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:29:52.932 [133/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:29:52.932 [134/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:29:52.932 [135/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:29:52.932 [136/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:29:52.932 [137/244] Linking target lib/librte_net.so.24.1 00:29:53.190 [138/244] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:29:53.190 [139/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:29:53.190 [140/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:29:53.190 [141/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 
00:29:53.190 [142/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:29:53.190 [143/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:29:53.190 [144/244] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:29:53.190 [145/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:29:53.190 [146/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:29:53.190 [147/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:29:53.190 [148/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:29:53.190 [149/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:29:53.190 [150/244] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:29:53.448 [151/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:29:53.448 [152/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:29:53.448 [153/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:29:53.448 [154/244] Linking static target lib/librte_cmdline.a 00:29:53.706 [155/244] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:29:53.706 [156/244] Linking target lib/librte_cmdline.so.24.1 00:29:53.706 [157/244] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:29:53.706 [158/244] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:29:53.965 [159/244] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:29:53.965 [160/244] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:29:53.965 [161/244] Linking static target lib/librte_timer.a 00:29:53.965 [162/244] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:29:53.965 [163/244] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:29:53.965 [164/244] Compiling C object 
lib/librte_power.a.p/power_guest_channel.c.o 00:29:53.965 [165/244] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:29:53.965 [166/244] Linking target lib/librte_timer.so.24.1 00:29:53.965 [167/244] Linking static target lib/librte_compressdev.a 00:29:53.965 [168/244] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:29:54.223 [169/244] Linking target lib/librte_compressdev.so.24.1 00:29:54.223 [170/244] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:29:54.223 [171/244] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:29:54.223 [172/244] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:29:54.223 [173/244] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:29:54.790 [174/244] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:29:54.790 [175/244] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:29:54.790 [176/244] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:29:54.790 [177/244] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:29:54.790 [178/244] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:29:54.790 [179/244] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:29:54.790 [180/244] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:29:54.790 [181/244] Linking static target lib/librte_dmadev.a 00:29:55.048 [182/244] Linking target lib/librte_dmadev.so.24.1 00:29:55.048 [183/244] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:29:55.048 [184/244] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:29:55.048 [185/244] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:29:55.048 [186/244] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:29:55.307 [187/244] Compiling C object 
lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:29:55.307 [188/244] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:29:55.307 [189/244] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:29:55.307 [190/244] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:29:55.565 [191/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:29:55.565 [192/244] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:29:55.565 [193/244] Linking static target lib/librte_power.a 00:29:55.565 [194/244] Linking static target lib/librte_reorder.a 00:29:55.565 [195/244] Linking static target lib/librte_hash.a 00:29:55.565 [196/244] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:29:55.565 [197/244] Linking target lib/librte_hash.so.24.1 00:29:55.565 [198/244] Linking static target lib/librte_security.a 00:29:55.565 [199/244] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:29:55.824 [200/244] Linking target lib/librte_reorder.so.24.1 00:29:55.824 [201/244] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:29:55.824 [202/244] Linking target lib/librte_cryptodev.so.24.1 00:29:55.824 [203/244] Linking static target lib/librte_cryptodev.a 00:29:56.082 [204/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:29:56.082 [205/244] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:29:56.082 [206/244] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:29:56.340 [207/244] Linking target lib/librte_ethdev.so.24.1 00:29:56.340 [208/244] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:29:56.340 [209/244] Linking target lib/librte_security.so.24.1 00:29:56.340 [210/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:29:56.340 [211/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:29:56.340 [212/244] Compiling 
C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:29:56.340 [213/244] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:29:56.340 [214/244] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:29:56.598 [215/244] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:29:56.598 [216/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:29:56.598 [217/244] Linking static target lib/librte_ethdev.a 00:29:56.598 [218/244] Linking target lib/librte_power.so.24.1 00:29:56.856 [219/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:29:56.856 [220/244] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:29:56.856 [221/244] Linking static target drivers/libtmp_rte_bus_vdev.a 00:29:56.856 [222/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:29:57.114 [223/244] Linking static target drivers/libtmp_rte_bus_pci.a 00:29:57.114 [224/244] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:29:57.373 [225/244] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:29:57.373 [226/244] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:29:57.373 [227/244] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:29:57.373 [228/244] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:29:57.373 [229/244] Linking static target drivers/libtmp_rte_mempool_ring.a 00:29:57.373 [230/244] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:29:57.373 [231/244] Linking static target drivers/librte_bus_vdev.a 00:29:57.373 [232/244] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:29:57.631 [233/244] Linking target drivers/librte_bus_vdev.so.24.1 00:29:57.631 [234/244] Linking static target 
drivers/librte_bus_pci.a 00:29:57.631 [235/244] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:29:57.631 [236/244] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:29:57.631 [237/244] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:29:57.631 [238/244] Linking target drivers/librte_bus_pci.so.24.1 00:29:57.889 [239/244] Linking static target drivers/librte_mempool_ring.a 00:29:57.889 [240/244] Linking target drivers/librte_mempool_ring.so.24.1 00:29:59.265 [241/244] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:30:05.828 [242/244] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:30:05.828 [243/244] Linking target lib/librte_vhost.so.24.1 00:30:05.828 [244/244] Linking static target lib/librte_vhost.a 00:30:05.828 INFO: autodetecting backend as ninja 00:30:05.828 INFO: calculating backend command to run: /usr/local/bin/ninja -C /mnt/sdadir/spdk/dpdk/build-tmp 00:30:11.094 CC lib/log/log.o 00:30:11.094 CC lib/log/log_flags.o 00:30:11.094 CC lib/log/log_deprecated.o 00:30:11.094 CC lib/ut_mock/mock.o 00:30:11.094 LIB libspdk_ut_mock.a 00:30:11.094 LIB libspdk_log.a 00:30:11.094 CC lib/ioat/ioat.o 00:30:11.352 CC lib/dma/dma.o 00:30:11.352 CXX lib/trace_parser/trace.o 00:30:11.352 CC lib/util/base64.o 00:30:11.352 CC lib/util/bit_array.o 00:30:11.352 CC lib/util/cpuset.o 00:30:11.352 CC lib/util/crc16.o 00:30:11.352 CC lib/util/crc32.o 00:30:11.353 CC lib/util/crc32c.o 00:30:11.353 CC lib/util/crc32_ieee.o 00:30:11.353 CC lib/util/crc64.o 00:30:11.353 CC lib/util/dif.o 00:30:11.353 CC lib/util/fd.o 00:30:11.353 CC lib/util/fd_group.o 00:30:11.353 CC lib/util/file.o 00:30:11.353 CC lib/util/hexlify.o 00:30:11.353 CC lib/util/iov.o 00:30:11.353 CC lib/util/math.o 00:30:11.353 CC lib/util/net.o 00:30:11.353 CC lib/util/pipe.o 00:30:11.353 CC lib/util/string.o 00:30:11.353 CC lib/util/xor.o 00:30:11.353 CC 
lib/util/zipf.o 00:30:11.353 CC lib/util/strerror_tls.o 00:30:11.353 CC lib/util/uuid.o 00:30:11.611 CC lib/vfio_user/host/vfio_user_pci.o 00:30:11.611 CC lib/vfio_user/host/vfio_user.o 00:30:11.869 LIB libspdk_dma.a 00:30:12.127 LIB libspdk_ioat.a 00:30:12.127 LIB libspdk_vfio_user.a 00:30:12.695 LIB libspdk_trace_parser.a 00:30:12.695 LIB libspdk_util.a 00:30:13.261 CC lib/vmd/led.o 00:30:13.261 CC lib/vmd/vmd.o 00:30:13.261 CC lib/json/json_parse.o 00:30:13.261 CC lib/json/json_util.o 00:30:13.261 CC lib/json/json_write.o 00:30:13.261 CC lib/conf/conf.o 00:30:13.261 CC lib/env_dpdk/env.o 00:30:13.261 CC lib/env_dpdk/memory.o 00:30:13.261 CC lib/env_dpdk/pci.o 00:30:13.261 CC lib/env_dpdk/init.o 00:30:13.261 CC lib/env_dpdk/threads.o 00:30:13.261 CC lib/env_dpdk/pci_ioat.o 00:30:13.261 CC lib/env_dpdk/pci_virtio.o 00:30:13.261 CC lib/env_dpdk/pci_vmd.o 00:30:13.261 CC lib/env_dpdk/pci_idxd.o 00:30:13.261 CC lib/env_dpdk/pci_event.o 00:30:13.261 CC lib/env_dpdk/sigbus_handler.o 00:30:13.261 CC lib/env_dpdk/pci_dpdk.o 00:30:13.261 CC lib/env_dpdk/pci_dpdk_2207.o 00:30:13.261 CC lib/env_dpdk/pci_dpdk_2211.o 00:30:13.827 LIB libspdk_conf.a 00:30:14.087 LIB libspdk_json.a 00:30:14.087 LIB libspdk_vmd.a 00:30:14.677 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:30:14.677 CC lib/jsonrpc/jsonrpc_client.o 00:30:14.677 CC lib/jsonrpc/jsonrpc_server.o 00:30:14.677 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:30:15.243 LIB libspdk_jsonrpc.a 00:30:15.243 LIB libspdk_env_dpdk.a 00:30:15.811 CC lib/rpc/rpc.o 00:30:16.070 LIB libspdk_rpc.a 00:30:16.637 CC lib/keyring/keyring_rpc.o 00:30:16.637 CC lib/keyring/keyring.o 00:30:16.637 CC lib/trace/trace.o 00:30:16.637 CC lib/trace/trace_rpc.o 00:30:16.637 CC lib/trace/trace_flags.o 00:30:16.637 CC lib/notify/notify_rpc.o 00:30:16.637 CC lib/notify/notify.o 00:30:16.896 LIB libspdk_notify.a 00:30:16.896 LIB libspdk_keyring.a 00:30:17.155 LIB libspdk_trace.a 00:30:17.721 CC lib/sock/sock_rpc.o 00:30:17.721 CC lib/sock/sock.o 00:30:17.721 CC 
lib/thread/iobuf.o 00:30:17.721 CC lib/thread/thread.o 00:30:18.288 LIB libspdk_sock.a 00:30:18.855 CC lib/nvme/nvme_ctrlr_cmd.o 00:30:18.855 CC lib/nvme/nvme_ctrlr.o 00:30:18.855 CC lib/nvme/nvme_fabric.o 00:30:18.855 CC lib/nvme/nvme_ns_cmd.o 00:30:18.855 CC lib/nvme/nvme_ns.o 00:30:18.855 CC lib/nvme/nvme_pcie_common.o 00:30:18.855 CC lib/nvme/nvme_pcie.o 00:30:18.855 CC lib/nvme/nvme_qpair.o 00:30:18.855 CC lib/nvme/nvme.o 00:30:18.855 CC lib/nvme/nvme_quirks.o 00:30:18.855 CC lib/nvme/nvme_transport.o 00:30:18.855 CC lib/nvme/nvme_discovery.o 00:30:18.855 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:30:18.855 CC lib/nvme/nvme_tcp.o 00:30:18.855 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:30:18.855 CC lib/nvme/nvme_opal.o 00:30:18.855 CC lib/nvme/nvme_io_msg.o 00:30:18.855 CC lib/nvme/nvme_poll_group.o 00:30:18.855 CC lib/nvme/nvme_zns.o 00:30:18.855 CC lib/nvme/nvme_stubs.o 00:30:18.855 CC lib/nvme/nvme_auth.o 00:30:18.855 CC lib/nvme/nvme_cuse.o 00:30:19.113 LIB libspdk_thread.a 00:30:20.048 CC lib/accel/accel.o 00:30:20.048 CC lib/blob/blobstore.o 00:30:20.048 CC lib/blob/request.o 00:30:20.048 CC lib/accel/accel_rpc.o 00:30:20.048 CC lib/init/json_config.o 00:30:20.048 CC lib/blob/zeroes.o 00:30:20.048 CC lib/blob/blob_bs_dev.o 00:30:20.048 CC lib/init/subsystem.o 00:30:20.048 CC lib/accel/accel_sw.o 00:30:20.048 CC lib/init/subsystem_rpc.o 00:30:20.048 CC lib/init/rpc.o 00:30:20.048 CC lib/virtio/virtio.o 00:30:20.048 CC lib/virtio/virtio_vhost_user.o 00:30:20.048 CC lib/virtio/virtio_vfio_user.o 00:30:20.048 CC lib/virtio/virtio_pci.o 00:30:20.983 LIB libspdk_init.a 00:30:20.983 LIB libspdk_virtio.a 00:30:21.242 CC lib/event/app.o 00:30:21.242 CC lib/event/reactor.o 00:30:21.242 CC lib/event/app_rpc.o 00:30:21.242 CC lib/event/log_rpc.o 00:30:21.242 CC lib/event/scheduler_static.o 00:30:21.809 LIB libspdk_accel.a 00:30:22.068 LIB libspdk_event.a 00:30:22.068 LIB libspdk_nvme.a 00:30:22.636 CC lib/bdev/bdev_rpc.o 00:30:22.636 CC lib/bdev/bdev.o 00:30:22.636 CC 
lib/bdev/bdev_zone.o 00:30:22.636 CC lib/bdev/part.o 00:30:22.636 CC lib/bdev/scsi_nvme.o 00:30:23.573 LIB libspdk_blob.a 00:30:24.537 CC lib/lvol/lvol.o 00:30:24.537 CC lib/blobfs/blobfs.o 00:30:24.537 CC lib/blobfs/tree.o 00:30:25.105 LIB libspdk_bdev.a 00:30:25.364 LIB libspdk_blobfs.a 00:30:25.622 LIB libspdk_lvol.a 00:30:26.558 CC lib/nvmf/ctrlr.o 00:30:26.558 CC lib/nvmf/ctrlr_discovery.o 00:30:26.558 CC lib/nvmf/ctrlr_bdev.o 00:30:26.558 CC lib/nvmf/subsystem.o 00:30:26.558 CC lib/ftl/ftl_core.o 00:30:26.558 CC lib/nbd/nbd.o 00:30:26.558 CC lib/ftl/ftl_init.o 00:30:26.558 CC lib/nvmf/nvmf.o 00:30:26.558 CC lib/nvmf/nvmf_rpc.o 00:30:26.558 CC lib/nbd/nbd_rpc.o 00:30:26.558 CC lib/ftl/ftl_layout.o 00:30:26.558 CC lib/nvmf/transport.o 00:30:26.558 CC lib/nvmf/tcp.o 00:30:26.558 CC lib/ftl/ftl_io.o 00:30:26.558 CC lib/nvmf/stubs.o 00:30:26.558 CC lib/ftl/ftl_sb.o 00:30:26.558 CC lib/ftl/ftl_debug.o 00:30:26.558 CC lib/nvmf/mdns_server.o 00:30:26.558 CC lib/ftl/ftl_l2p.o 00:30:26.558 CC lib/scsi/dev.o 00:30:26.558 CC lib/nvmf/auth.o 00:30:26.558 CC lib/scsi/lun.o 00:30:26.558 CC lib/ftl/ftl_l2p_flat.o 00:30:26.558 CC lib/scsi/port.o 00:30:26.558 CC lib/ftl/ftl_nv_cache.o 00:30:26.558 CC lib/scsi/scsi.o 00:30:26.558 CC lib/ftl/ftl_band.o 00:30:26.558 CC lib/scsi/scsi_bdev.o 00:30:26.558 CC lib/ftl/ftl_writer.o 00:30:26.558 CC lib/scsi/scsi_pr.o 00:30:26.558 CC lib/ftl/ftl_band_ops.o 00:30:26.558 CC lib/ftl/ftl_rq.o 00:30:26.558 CC lib/scsi/scsi_rpc.o 00:30:26.558 CC lib/ftl/ftl_reloc.o 00:30:26.558 CC lib/scsi/task.o 00:30:26.558 CC lib/ftl/ftl_l2p_cache.o 00:30:26.558 CC lib/ftl/ftl_p2l.o 00:30:26.558 CC lib/ftl/mngt/ftl_mngt.o 00:30:26.558 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:30:26.558 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:30:26.558 CC lib/ftl/mngt/ftl_mngt_startup.o 00:30:26.558 CC lib/ftl/mngt/ftl_mngt_md.o 00:30:26.558 CC lib/ftl/mngt/ftl_mngt_misc.o 00:30:26.558 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:30:26.558 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:30:26.558 CC 
lib/ftl/mngt/ftl_mngt_band.o 00:30:26.558 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:30:26.558 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:30:26.558 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:30:26.558 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:30:26.558 CC lib/ftl/utils/ftl_conf.o 00:30:26.558 CC lib/ftl/utils/ftl_md.o 00:30:26.558 CC lib/ftl/utils/ftl_mempool.o 00:30:26.558 CC lib/ftl/utils/ftl_bitmap.o 00:30:26.558 CC lib/ftl/utils/ftl_property.o 00:30:26.558 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:30:26.817 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:30:26.817 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:30:26.817 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:30:26.817 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:30:26.817 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:30:26.817 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:30:26.817 CC lib/ftl/upgrade/ftl_sb_v3.o 00:30:26.817 CC lib/ftl/upgrade/ftl_sb_v5.o 00:30:26.817 CC lib/ftl/nvc/ftl_nvc_dev.o 00:30:26.817 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:30:26.817 CC lib/ftl/base/ftl_base_dev.o 00:30:26.817 CC lib/ftl/base/ftl_base_bdev.o 00:30:28.716 LIB libspdk_nbd.a 00:30:28.716 LIB libspdk_scsi.a 00:30:28.716 LIB libspdk_ftl.a 00:30:28.973 CC lib/vhost/vhost.o 00:30:28.973 CC lib/vhost/vhost_scsi.o 00:30:28.973 CC lib/vhost/vhost_rpc.o 00:30:28.973 CC lib/vhost/vhost_blk.o 00:30:28.973 CC lib/vhost/rte_vhost_user.o 00:30:28.973 CC lib/iscsi/conn.o 00:30:28.973 CC lib/iscsi/init_grp.o 00:30:28.973 CC lib/iscsi/iscsi.o 00:30:28.973 CC lib/iscsi/md5.o 00:30:28.973 CC lib/iscsi/param.o 00:30:28.973 CC lib/iscsi/portal_grp.o 00:30:28.973 CC lib/iscsi/tgt_node.o 00:30:28.973 CC lib/iscsi/iscsi_subsystem.o 00:30:28.973 CC lib/iscsi/iscsi_rpc.o 00:30:28.973 CC lib/iscsi/task.o 00:30:29.538 LIB libspdk_nvmf.a 00:30:30.472 LIB libspdk_vhost.a 00:30:30.730 LIB libspdk_iscsi.a 00:30:34.039 CC module/env_dpdk/env_dpdk_rpc.o 00:30:34.039 CC module/keyring/linux/keyring.o 00:30:34.039 CC module/blob/bdev/blob_bdev.o 00:30:34.039 CC module/keyring/linux/keyring_rpc.o 
00:30:34.039 CC module/keyring/file/keyring.o 00:30:34.039 CC module/scheduler/gscheduler/gscheduler.o 00:30:34.039 CC module/keyring/file/keyring_rpc.o 00:30:34.039 CC module/sock/posix/posix.o 00:30:34.039 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:30:34.039 CC module/accel/error/accel_error.o 00:30:34.039 CC module/accel/error/accel_error_rpc.o 00:30:34.039 CC module/accel/ioat/accel_ioat.o 00:30:34.039 CC module/accel/ioat/accel_ioat_rpc.o 00:30:34.039 CC module/scheduler/dynamic/scheduler_dynamic.o 00:30:34.039 LIB libspdk_env_dpdk_rpc.a 00:30:34.605 LIB libspdk_scheduler_gscheduler.a 00:30:34.605 LIB libspdk_keyring_linux.a 00:30:34.605 LIB libspdk_keyring_file.a 00:30:34.605 LIB libspdk_scheduler_dpdk_governor.a 00:30:34.605 LIB libspdk_accel_error.a 00:30:34.605 LIB libspdk_accel_ioat.a 00:30:34.605 LIB libspdk_scheduler_dynamic.a 00:30:34.605 LIB libspdk_blob_bdev.a 00:30:34.864 LIB libspdk_sock_posix.a 00:30:35.123 CC module/bdev/aio/bdev_aio_rpc.o 00:30:35.123 CC module/bdev/aio/bdev_aio.o 00:30:35.123 CC module/bdev/delay/vbdev_delay.o 00:30:35.123 CC module/bdev/gpt/gpt.o 00:30:35.123 CC module/bdev/delay/vbdev_delay_rpc.o 00:30:35.123 CC module/bdev/gpt/vbdev_gpt.o 00:30:35.123 CC module/bdev/malloc/bdev_malloc.o 00:30:35.123 CC module/bdev/malloc/bdev_malloc_rpc.o 00:30:35.123 CC module/bdev/zone_block/vbdev_zone_block.o 00:30:35.381 CC module/blobfs/bdev/blobfs_bdev.o 00:30:35.381 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:30:35.381 CC module/bdev/virtio/bdev_virtio_scsi.o 00:30:35.381 CC module/bdev/passthru/vbdev_passthru.o 00:30:35.381 CC module/bdev/virtio/bdev_virtio_blk.o 00:30:35.381 CC module/bdev/virtio/bdev_virtio_rpc.o 00:30:35.381 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:30:35.381 CC module/bdev/raid/bdev_raid.o 00:30:35.381 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:30:35.381 CC module/bdev/raid/bdev_raid_rpc.o 00:30:35.381 CC module/bdev/nvme/bdev_nvme.o 00:30:35.381 CC module/bdev/raid/bdev_raid_sb.o 
00:30:35.381 CC module/bdev/lvol/vbdev_lvol.o 00:30:35.381 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:30:35.381 CC module/bdev/error/vbdev_error.o 00:30:35.381 CC module/bdev/split/vbdev_split.o 00:30:35.381 CC module/bdev/raid/raid0.o 00:30:35.381 CC module/bdev/nvme/bdev_nvme_rpc.o 00:30:35.381 CC module/bdev/error/vbdev_error_rpc.o 00:30:35.381 CC module/bdev/raid/raid1.o 00:30:35.381 CC module/bdev/split/vbdev_split_rpc.o 00:30:35.381 CC module/bdev/raid/concat.o 00:30:35.381 CC module/bdev/nvme/nvme_rpc.o 00:30:35.381 CC module/bdev/nvme/bdev_mdns_client.o 00:30:35.381 CC module/bdev/nvme/vbdev_opal.o 00:30:35.381 CC module/bdev/nvme/vbdev_opal_rpc.o 00:30:35.381 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:30:35.381 CC module/bdev/null/bdev_null.o 00:30:35.381 CC module/bdev/null/bdev_null_rpc.o 00:30:35.381 CC module/bdev/ftl/bdev_ftl.o 00:30:35.381 CC module/bdev/ftl/bdev_ftl_rpc.o 00:30:36.314 LIB libspdk_blobfs_bdev.a 00:30:36.314 LIB libspdk_bdev_null.a 00:30:36.314 LIB libspdk_bdev_split.a 00:30:36.314 LIB libspdk_bdev_passthru.a 00:30:36.314 LIB libspdk_bdev_malloc.a 00:30:36.314 LIB libspdk_bdev_delay.a 00:30:36.314 LIB libspdk_bdev_gpt.a 00:30:36.314 LIB libspdk_bdev_error.a 00:30:36.314 LIB libspdk_bdev_aio.a 00:30:36.314 LIB libspdk_bdev_zone_block.a 00:30:36.572 LIB libspdk_bdev_ftl.a 00:30:36.572 LIB libspdk_bdev_virtio.a 00:30:36.572 LIB libspdk_bdev_lvol.a 00:30:36.830 LIB libspdk_bdev_raid.a 00:30:38.204 LIB libspdk_bdev_nvme.a 00:30:39.579 CC module/event/subsystems/iobuf/iobuf.o 00:30:39.579 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:30:39.579 CC module/event/subsystems/sock/sock.o 00:30:39.579 CC module/event/subsystems/scheduler/scheduler.o 00:30:39.579 CC module/event/subsystems/vmd/vmd.o 00:30:39.579 CC module/event/subsystems/vmd/vmd_rpc.o 00:30:39.579 CC module/event/subsystems/keyring/keyring.o 00:30:39.579 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:30:39.837 LIB libspdk_event_keyring.a 00:30:39.837 LIB libspdk_event_sock.a 
00:30:39.837 LIB libspdk_event_vhost_blk.a 00:30:39.837 LIB libspdk_event_scheduler.a 00:30:39.837 LIB libspdk_event_iobuf.a 00:30:39.837 LIB libspdk_event_vmd.a 00:30:40.404 CC module/event/subsystems/accel/accel.o 00:30:40.661 LIB libspdk_event_accel.a 00:30:41.228 CC module/event/subsystems/bdev/bdev.o 00:30:41.487 LIB libspdk_event_bdev.a 00:30:41.745 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:30:41.745 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:30:41.745 CC module/event/subsystems/scsi/scsi.o 00:30:41.745 CC module/event/subsystems/nbd/nbd.o 00:30:42.312 LIB libspdk_event_scsi.a 00:30:42.312 LIB libspdk_event_nbd.a 00:30:42.312 LIB libspdk_event_nvmf.a 00:30:42.571 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:30:42.571 CC module/event/subsystems/iscsi/iscsi.o 00:30:42.830 LIB libspdk_event_vhost_scsi.a 00:30:42.830 LIB libspdk_event_iscsi.a 00:30:43.397 make[1]: Nothing to be done for 'all'. 00:30:43.397 CC app/trace_record/trace_record.o 00:30:43.397 CXX app/trace/trace.o 00:30:43.397 CC app/spdk_nvme_identify/identify.o 00:30:43.397 CC app/spdk_lspci/spdk_lspci.o 00:30:43.397 CC app/spdk_nvme_discover/discovery_aer.o 00:30:43.397 CC app/spdk_top/spdk_top.o 00:30:43.397 CC app/spdk_nvme_perf/perf.o 00:30:43.397 CC examples/interrupt_tgt/interrupt_tgt.o 00:30:43.397 CC app/spdk_dd/spdk_dd.o 00:30:43.397 CC app/iscsi_tgt/iscsi_tgt.o 00:30:43.397 CC app/nvmf_tgt/nvmf_main.o 00:30:43.656 CC app/spdk_tgt/spdk_tgt.o 00:30:43.656 CC examples/util/zipf/zipf.o 00:30:43.656 CC examples/ioat/verify/verify.o 00:30:43.656 CC examples/ioat/perf/perf.o 00:30:43.914 LINK spdk_lspci 00:30:43.914 LINK iscsi_tgt 00:30:43.914 LINK zipf 00:30:43.914 LINK nvmf_tgt 00:30:43.914 LINK interrupt_tgt 00:30:43.914 LINK spdk_tgt 00:30:43.914 LINK verify 00:30:44.173 LINK spdk_nvme_discover 00:30:44.173 LINK spdk_trace_record 00:30:44.173 LINK ioat_perf 00:30:44.173 LINK spdk_trace 00:30:44.173 LINK spdk_dd 00:30:45.110 LINK spdk_nvme_perf 00:30:45.110 LINK spdk_top 
00:30:45.110 LINK spdk_nvme_identify 00:30:46.046 CC app/vhost/vhost.o 00:30:46.615 LINK vhost 00:30:48.536 CC examples/sock/hello_world/hello_sock.o 00:30:48.536 CC examples/vmd/led/led.o 00:30:48.536 CC examples/vmd/lsvmd/lsvmd.o 00:30:48.536 CC examples/thread/thread/thread_ex.o 00:30:48.794 LINK led 00:30:48.794 LINK lsvmd 00:30:49.052 LINK hello_sock 00:30:49.052 LINK thread 00:30:57.165 CC examples/nvme/hotplug/hotplug.o 00:30:57.165 CC examples/nvme/nvme_manage/nvme_manage.o 00:30:57.165 CC examples/nvme/cmb_copy/cmb_copy.o 00:30:57.165 CC examples/nvme/arbitration/arbitration.o 00:30:57.165 CC examples/nvme/hello_world/hello_world.o 00:30:57.165 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:30:57.165 CC examples/nvme/reconnect/reconnect.o 00:30:57.165 CC examples/nvme/abort/abort.o 00:30:57.165 LINK pmr_persistence 00:30:57.165 LINK cmb_copy 00:30:57.165 LINK hotplug 00:30:57.165 LINK hello_world 00:30:57.165 LINK arbitration 00:30:57.165 LINK reconnect 00:30:57.165 LINK abort 00:30:57.165 LINK nvme_manage 00:31:00.448 CC examples/accel/perf/accel_perf.o 00:31:00.448 CC examples/blob/cli/blobcli.o 00:31:00.448 CC examples/blob/hello_world/hello_blob.o 00:31:00.707 LINK hello_blob 00:31:00.991 LINK accel_perf 00:31:01.271 LINK blobcli 00:31:06.537 CC examples/bdev/hello_world/hello_bdev.o 00:31:06.537 CC examples/bdev/bdevperf/bdevperf.o 00:31:06.537 LINK hello_bdev 00:31:06.796 LINK bdevperf 00:31:13.360 CC examples/nvmf/nvmf/nvmf.o 00:31:13.360 LINK nvmf 00:31:18.631 make: Leaving directory '/mnt/sdadir/spdk' 00:31:18.631 05:20:32 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@101 -- # rm -rf /mnt/sdadir/spdk 00:31:50.702 05:21:04 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@102 -- # umount /mnt/sdadir 00:31:50.702 05:21:04 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@103 -- # rm -rf /mnt/sdadir 00:31:50.702 05:21:04 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@105 -- # stats=($(cat "/sys/block/$dev/stat")) 
00:31:50.702 05:21:04 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@105 -- # cat /sys/block/sda/stat 00:31:50.702 READ IO cnt: 99 merges: 0 sectors: 3328 ticks: 67 00:31:50.702 WRITE IO cnt: 632626 merges: 629893 sectors: 10887528 ticks: 429281 00:31:50.702 in flight: 0 io ticks: 210759 time in queue: 468492 00:31:50.702 05:21:04 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@107 -- # printf 'READ IO cnt: % 8u merges: % 8u sectors: % 8u ticks: % 8u\n' 99 0 3328 67 00:31:50.702 05:21:04 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@109 -- # printf 'WRITE IO cnt: % 8u merges: % 8u sectors: % 8u ticks: % 8u\n' 632626 629893 10887528 429281 00:31:50.702 05:21:04 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@111 -- # printf 'in flight: % 8u io ticks: % 8u time in queue: % 8u\n' 0 210759 468492 00:31:50.702 05:21:04 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@1 -- # cleanup 00:31:50.702 05:21:04 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_delete Nvme0n1 00:31:50.702 [2024-07-24 05:21:04.395376] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Nvme0n1p0) received event(SPDK_BDEV_EVENT_REMOVE) 00:31:50.702 05:21:04 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@13 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_delete EE_Malloc0 00:31:50.702 05:21:04 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:50.702 05:21:04 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@15 -- # killprocess 86636 00:31:50.702 05:21:04 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@948 -- # '[' -z 86636 ']' 00:31:50.702 05:21:04 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@952 -- # kill -0 86636 00:31:50.702 05:21:04 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@953 -- # uname 00:31:50.702 05:21:04 iscsi_tgt.iscsi_tgt_ext4test -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:50.702 05:21:04 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86636 00:31:50.702 killing process with pid 86636 00:31:50.702 05:21:04 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:50.702 05:21:04 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:50.702 05:21:04 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86636' 00:31:50.702 05:21:04 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@967 -- # kill 86636 00:31:50.702 05:21:04 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@972 -- # wait 86636 00:31:54.927 05:21:08 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@17 -- # mountpoint -q /mnt/sdadir 00:31:54.927 05:21:08 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@18 -- # rm -rf /mnt/sdadir 00:31:54.927 Cleaning up iSCSI connection 00:31:54.927 05:21:08 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@20 -- # iscsicleanup 00:31:54.927 05:21:08 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:31:54.927 05:21:08 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:31:54.927 Logging out of session [sid: 72, target: iqn.2013-06.com.intel.ch.spdk:Target1, portal: 10.0.0.1,3260] 00:31:54.927 Logout of [sid: 72, target: iqn.2013-06.com.intel.ch.spdk:Target1, portal: 10.0.0.1,3260] successful. 
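The `stats=($(cat "/sys/block/$dev/stat"))` step traced above splits the sysfs stat line into the fields that the three `printf` calls then report. A minimal sketch of the same parsing, fed the values from this run on stdin so it needs no sysfs access (field order per the kernel's block-layer stat documentation; the function name is illustrative, not part of the test script):

```shell
# Parse the first 11 fields of a /sys/block/<dev>/stat line.
# Field layout (Documentation/block/stat.rst): read I/Os, read merges,
# read sectors, read ticks, write I/Os, write merges, write sectors,
# write ticks, in-flight, io ticks, time in queue.
parse_block_stat() {
    # Read the stat line from stdin so the function is testable offline.
    read -r r_ios r_merges r_sectors r_ticks \
            w_ios w_merges w_sectors w_ticks \
            in_flight io_ticks time_in_queue _rest
    printf 'READ IO cnt: % 8u merges: % 8u sectors: % 8u ticks: % 8u\n' \
        "$r_ios" "$r_merges" "$r_sectors" "$r_ticks"
    printf 'WRITE IO cnt: % 8u merges: % 8u sectors: % 8u ticks: % 8u\n' \
        "$w_ios" "$w_merges" "$w_sectors" "$w_ticks"
}

# The counters printed in the log above:
parse_block_stat <<< "99 0 3328 67 632626 629893 10887528 429281 0 210759 468492"
```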
00:31:54.927 05:21:08 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:31:54.927 05:21:08 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@983 -- # rm -rf 00:31:54.927 05:21:08 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@21 -- # iscsitestfini 00:31:54.927 05:21:08 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:31:54.927 00:31:54.927 real 5m47.380s 00:31:54.927 user 9m34.046s 00:31:54.927 sys 3m0.200s 00:31:54.927 05:21:08 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:54.927 05:21:08 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@10 -- # set +x 00:31:54.927 ************************************ 00:31:54.927 END TEST iscsi_tgt_ext4test 00:31:54.927 ************************************ 00:31:54.927 05:21:08 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@49 -- # '[' 0 -eq 1 ']' 00:31:54.927 05:21:08 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@57 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:31:54.927 05:21:08 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@59 -- # '[' 0 -eq 1 ']' 00:31:54.927 05:21:08 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@65 -- # cleanup_veth_interfaces 00:31:54.927 05:21:08 iscsi_tgt -- iscsi_tgt/common.sh@95 -- # ip link set init_br nomaster 00:31:54.927 05:21:08 iscsi_tgt -- iscsi_tgt/common.sh@96 -- # ip link set tgt_br nomaster 00:31:54.927 05:21:08 iscsi_tgt -- iscsi_tgt/common.sh@97 -- # ip link set tgt_br2 nomaster 00:31:54.927 05:21:08 iscsi_tgt -- iscsi_tgt/common.sh@98 -- # ip link set init_br down 00:31:54.927 05:21:08 iscsi_tgt -- iscsi_tgt/common.sh@99 -- # ip link set tgt_br down 00:31:54.927 05:21:08 iscsi_tgt -- iscsi_tgt/common.sh@100 -- # ip link set tgt_br2 down 00:31:54.927 05:21:08 iscsi_tgt -- iscsi_tgt/common.sh@101 -- # ip link delete iscsi_br type bridge 00:31:54.927 05:21:08 iscsi_tgt -- iscsi_tgt/common.sh@102 -- # ip link delete spdk_init_int 00:31:54.927 05:21:09 iscsi_tgt -- 
iscsi_tgt/common.sh@103 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int 00:31:54.927 05:21:09 iscsi_tgt -- iscsi_tgt/common.sh@104 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int2 00:31:54.927 05:21:09 iscsi_tgt -- iscsi_tgt/common.sh@105 -- # ip netns del spdk_iscsi_ns 00:31:54.927 05:21:09 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:31:54.927 00:31:54.927 real 20m1.709s 00:31:54.927 user 36m7.147s 00:31:54.927 sys 7m14.506s 00:31:54.927 05:21:09 iscsi_tgt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:54.927 05:21:09 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:31:54.927 ************************************ 00:31:54.927 END TEST iscsi_tgt 00:31:54.928 ************************************ 00:31:54.928 05:21:09 -- spdk/autotest.sh@264 -- # run_test spdkcli_iscsi /home/vagrant/spdk_repo/spdk/test/spdkcli/iscsi.sh 00:31:54.928 05:21:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:54.928 05:21:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:54.928 05:21:09 -- common/autotest_common.sh@10 -- # set +x 00:31:54.928 ************************************ 00:31:54.928 START TEST spdkcli_iscsi 00:31:54.928 ************************************ 00:31:54.928 05:21:09 spdkcli_iscsi -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/iscsi.sh 00:31:54.928 * Looking for test storage... 
00:31:54.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:31:54.928 05:21:09 spdkcli_iscsi -- spdkcli/iscsi.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:31:54.928 05:21:09 spdkcli_iscsi -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:31:54.928 05:21:09 spdkcli_iscsi -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:31:54.928 05:21:09 spdkcli_iscsi -- spdkcli/iscsi.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:31:54.928 05:21:09 spdkcli_iscsi -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:31:54.928 05:21:09 spdkcli_iscsi -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:31:54.928 05:21:09 spdkcli_iscsi -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:31:54.928 05:21:09 spdkcli_iscsi -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:31:54.928 05:21:09 spdkcli_iscsi -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:31:54.928 05:21:09 spdkcli_iscsi -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:31:54.928 05:21:09 spdkcli_iscsi -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:31:54.928 05:21:09 spdkcli_iscsi -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:31:54.928 05:21:09 spdkcli_iscsi -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:31:54.928 05:21:09 spdkcli_iscsi -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:31:54.928 05:21:09 spdkcli_iscsi -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:31:54.928 05:21:09 spdkcli_iscsi -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:31:54.928 05:21:09 spdkcli_iscsi -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:31:54.928 05:21:09 spdkcli_iscsi -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:31:54.928 05:21:09 spdkcli_iscsi -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 
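The `TARGET_NS_CMD` and `ISCSI_APP` assignments sourced above use the bash array-prefix pattern: the namespace-wrapper array is expanded in front of the target command so the app launches inside `spdk_iscsi_ns`. A minimal sketch of that expansion (the binary path is illustrative; no namespace is actually entered here, the command line is only assembled):

```shell
# Namespace wrapper, as in iscsi_tgt/common.sh@12-13:
TARGET_NAMESPACE=spdk_iscsi_ns
TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE")

# Hypothetical target invocation to be wrapped:
ISCSI_APP=(./build/bin/iscsi_tgt -m 0xF)

# Prepend the wrapper, as common.sh@28 does; "${arr[@]}" keeps each
# element a separate word, so flags survive intact.
ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}")

echo "${ISCSI_APP[@]}"
```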
00:31:54.928 05:21:09 spdkcli_iscsi -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:31:54.928 05:21:09 spdkcli_iscsi -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:31:54.928 05:21:09 spdkcli_iscsi -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:31:54.928 05:21:09 spdkcli_iscsi -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:31:54.928 05:21:09 spdkcli_iscsi -- spdkcli/iscsi.sh@12 -- # MATCH_FILE=spdkcli_iscsi.test 00:31:54.928 05:21:09 spdkcli_iscsi -- spdkcli/iscsi.sh@13 -- # SPDKCLI_BRANCH=/iscsi 00:31:54.928 05:21:09 spdkcli_iscsi -- spdkcli/iscsi.sh@15 -- # trap cleanup EXIT 00:31:54.928 05:21:09 spdkcli_iscsi -- spdkcli/iscsi.sh@17 -- # timing_enter run_iscsi_tgt 00:31:54.928 05:21:09 spdkcli_iscsi -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:54.928 05:21:09 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:31:54.928 05:21:09 spdkcli_iscsi -- spdkcli/iscsi.sh@21 -- # iscsi_tgt_pid=125537 00:31:54.928 05:21:09 spdkcli_iscsi -- spdkcli/iscsi.sh@22 -- # waitforlisten 125537 00:31:54.928 05:21:09 spdkcli_iscsi -- spdkcli/iscsi.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x3 -p 0 --wait-for-rpc 00:31:54.928 05:21:09 spdkcli_iscsi -- common/autotest_common.sh@829 -- # '[' -z 125537 ']' 00:31:54.928 05:21:09 spdkcli_iscsi -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:54.928 05:21:09 spdkcli_iscsi -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:54.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:54.928 05:21:09 spdkcli_iscsi -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:54.928 05:21:09 spdkcli_iscsi -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:54.928 05:21:09 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:31:54.928 [2024-07-24 05:21:09.352949] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 00:31:54.928 [2024-07-24 05:21:09.353115] [ DPDK EAL parameters: iscsi --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125537 ] 00:31:54.928 [2024-07-24 05:21:09.533708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:55.186 [2024-07-24 05:21:09.760889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.186 [2024-07-24 05:21:09.760918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.753 05:21:10 spdkcli_iscsi -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:55.753 05:21:10 spdkcli_iscsi -- common/autotest_common.sh@862 -- # return 0 00:31:55.753 05:21:10 spdkcli_iscsi -- spdkcli/iscsi.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:31:56.012 [2024-07-24 05:21:10.640236] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:31:56.948 05:21:11 spdkcli_iscsi -- spdkcli/iscsi.sh@25 -- # timing_exit run_iscsi_tgt 00:31:56.948 05:21:11 spdkcli_iscsi -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:56.948 05:21:11 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:31:56.948 05:21:11 spdkcli_iscsi -- spdkcli/iscsi.sh@27 -- # timing_enter spdkcli_create_iscsi_config 00:31:56.948 05:21:11 spdkcli_iscsi -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:56.948 05:21:11 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:31:56.948 05:21:11 spdkcli_iscsi -- spdkcli/iscsi.sh@48 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 
''\''/bdevs/malloc create 32 512 Malloc0'\'' '\''Malloc0'\'' True 00:31:56.948 '\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:56.948 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:56.948 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:56.948 '\''/iscsi/portal_groups create 1 "127.0.0.1:3261 127.0.0.1:3263@0x1"'\'' '\''host=127.0.0.1, port=3261'\'' True 00:31:56.948 '\''/iscsi/portal_groups create 2 127.0.0.1:3262'\'' '\''host=127.0.0.1, port=3262'\'' True 00:31:56.948 '\''/iscsi/initiator_groups create 2 ANY 10.0.2.15/32'\'' '\''hostname=ANY, netmask=10.0.2.15/32'\'' True 00:31:56.948 '\''/iscsi/initiator_groups create 3 ANZ 10.0.2.15/32'\'' '\''hostname=ANZ, netmask=10.0.2.15/32'\'' True 00:31:56.948 '\''/iscsi/initiator_groups add_initiator 2 ANW 10.0.2.16/32'\'' '\''hostname=ANW, netmask=10.0.2.16'\'' True 00:31:56.948 '\''/iscsi/target_nodes create Target0 Target0_alias "Malloc0:0 Malloc1:1" 1:2 64 g=1'\'' '\''Target0'\'' True 00:31:56.948 '\''/iscsi/target_nodes create Target1 Target1_alias Malloc2:0 1:2 64 g=1'\'' '\''Target1'\'' True 00:31:56.948 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_add_pg_ig_maps "1:3 2:2"'\'' '\''portal_group1 - initiator_group3'\'' True 00:31:56.948 '\''/iscsi/target_nodes add_lun iqn.2016-06.io.spdk:Target1 Malloc3 2'\'' '\''Malloc3'\'' True 00:31:56.948 '\''/iscsi/auth_groups create 1 "user:test1 secret:test1 muser:mutual_test1 msecret:mutual_test1,user:test3 secret:test3 muser:mutual_test3 msecret:mutual_test3"'\'' '\''user=test3'\'' True 00:31:56.948 '\''/iscsi/auth_groups add_secret 1 user=test2 secret=test2 muser=mutual_test2 msecret=mutual_test2'\'' '\''user=test2'\'' True 00:31:56.948 '\''/iscsi/auth_groups create 2 "user:test4 secret:test4 muser:mutual_test4 msecret:mutual_test4"'\'' '\''user=test4'\'' True 00:31:56.948 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 set_auth g=1 d=true'\'' '\''disable_chap: True'\'' True 
00:31:56.948 '\''/iscsi/global_params set_auth g=1 d=true r=false'\'' '\''disable_chap: True'\'' True 00:31:56.948 '\''/iscsi ls'\'' '\''Malloc'\'' True 00:31:56.948 ' 00:32:05.067 Executing command: ['/bdevs/malloc create 32 512 Malloc0', 'Malloc0', True] 00:32:05.067 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:05.067 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:05.067 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:05.067 Executing command: ['/iscsi/portal_groups create 1 "127.0.0.1:3261 127.0.0.1:3263@0x1"', 'host=127.0.0.1, port=3261', True] 00:32:05.067 Executing command: ['/iscsi/portal_groups create 2 127.0.0.1:3262', 'host=127.0.0.1, port=3262', True] 00:32:05.067 Executing command: ['/iscsi/initiator_groups create 2 ANY 10.0.2.15/32', 'hostname=ANY, netmask=10.0.2.15/32', True] 00:32:05.067 Executing command: ['/iscsi/initiator_groups create 3 ANZ 10.0.2.15/32', 'hostname=ANZ, netmask=10.0.2.15/32', True] 00:32:05.067 Executing command: ['/iscsi/initiator_groups add_initiator 2 ANW 10.0.2.16/32', 'hostname=ANW, netmask=10.0.2.16', True] 00:32:05.067 Executing command: ['/iscsi/target_nodes create Target0 Target0_alias "Malloc0:0 Malloc1:1" 1:2 64 g=1', 'Target0', True] 00:32:05.067 Executing command: ['/iscsi/target_nodes create Target1 Target1_alias Malloc2:0 1:2 64 g=1', 'Target1', True] 00:32:05.067 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_add_pg_ig_maps "1:3 2:2"', 'portal_group1 - initiator_group3', True] 00:32:05.067 Executing command: ['/iscsi/target_nodes add_lun iqn.2016-06.io.spdk:Target1 Malloc3 2', 'Malloc3', True] 00:32:05.067 Executing command: ['/iscsi/auth_groups create 1 "user:test1 secret:test1 muser:mutual_test1 msecret:mutual_test1,user:test3 secret:test3 muser:mutual_test3 msecret:mutual_test3"', 'user=test3', True] 00:32:05.067 Executing command: ['/iscsi/auth_groups add_secret 1 
user=test2 secret=test2 muser=mutual_test2 msecret=mutual_test2', 'user=test2', True] 00:32:05.067 Executing command: ['/iscsi/auth_groups create 2 "user:test4 secret:test4 muser:mutual_test4 msecret:mutual_test4"', 'user=test4', True] 00:32:05.067 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 set_auth g=1 d=true', 'disable_chap: True', True] 00:32:05.067 Executing command: ['/iscsi/global_params set_auth g=1 d=true r=false', 'disable_chap: True', True] 00:32:05.067 Executing command: ['/iscsi ls', 'Malloc', True] 00:32:05.067 05:21:18 spdkcli_iscsi -- spdkcli/iscsi.sh@49 -- # timing_exit spdkcli_create_iscsi_config 00:32:05.067 05:21:18 spdkcli_iscsi -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:05.067 05:21:18 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:32:05.067 05:21:18 spdkcli_iscsi -- spdkcli/iscsi.sh@51 -- # timing_enter spdkcli_check_match 00:32:05.067 05:21:18 spdkcli_iscsi -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:05.067 05:21:18 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:32:05.067 05:21:18 spdkcli_iscsi -- spdkcli/iscsi.sh@52 -- # check_match 00:32:05.067 05:21:18 spdkcli_iscsi -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /iscsi 00:32:05.067 05:21:19 spdkcli_iscsi -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_iscsi.test.match 00:32:05.067 05:21:19 spdkcli_iscsi -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_iscsi.test 00:32:05.067 05:21:19 spdkcli_iscsi -- spdkcli/iscsi.sh@53 -- # timing_exit spdkcli_check_match 00:32:05.067 05:21:19 spdkcli_iscsi -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:05.067 05:21:19 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:32:05.067 05:21:19 spdkcli_iscsi -- spdkcli/iscsi.sh@55 -- # timing_enter spdkcli_clear_iscsi_config 
00:32:05.067 05:21:19 spdkcli_iscsi -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:05.067 05:21:19 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:32:05.067 05:21:19 spdkcli_iscsi -- spdkcli/iscsi.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/iscsi/auth_groups delete_secret 1 test2'\'' '\''user=test2'\'' 00:32:05.067 '\''/iscsi/auth_groups delete_secret_all 1'\'' '\''user=test1'\'' 00:32:05.067 '\''/iscsi/auth_groups delete 1'\'' '\''user=test1'\'' 00:32:05.067 '\''/iscsi/auth_groups delete_all'\'' '\''user=test4'\'' 00:32:05.067 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_remove_pg_ig_maps "1:3 2:2"'\'' '\''portal_group1 - initiator_group3'\'' 00:32:05.067 '\''/iscsi/target_nodes delete iqn.2016-06.io.spdk:Target1'\'' '\''Target1'\'' 00:32:05.067 '\''/iscsi/target_nodes delete_all'\'' '\''Target0'\'' 00:32:05.067 '\''/iscsi/initiator_groups delete_initiator 2 ANW 10.0.2.16/32'\'' '\''ANW'\'' 00:32:05.067 '\''/iscsi/initiator_groups delete 3'\'' '\''ANZ'\'' 00:32:05.067 '\''/iscsi/initiator_groups delete_all'\'' '\''ANY'\'' 00:32:05.067 '\''/iscsi/portal_groups delete 1'\'' '\''127.0.0.1:3261'\'' 00:32:05.067 '\''/iscsi/portal_groups delete_all'\'' '\''127.0.0.1:3262'\'' 00:32:05.067 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:05.067 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:05.067 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:05.067 '\''/bdevs/malloc delete Malloc0'\'' '\''Malloc0'\'' 00:32:05.067 ' 00:32:11.633 Executing command: ['/iscsi/auth_groups delete_secret 1 test2', 'user=test2', False] 00:32:11.633 Executing command: ['/iscsi/auth_groups delete_secret_all 1', 'user=test1', False] 00:32:11.633 Executing command: ['/iscsi/auth_groups delete 1', 'user=test1', False] 00:32:11.633 Executing command: ['/iscsi/auth_groups delete_all', 'user=test4', False] 00:32:11.633 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 
iscsi_target_node_remove_pg_ig_maps "1:3 2:2"', 'portal_group1 - initiator_group3', False] 00:32:11.633 Executing command: ['/iscsi/target_nodes delete iqn.2016-06.io.spdk:Target1', 'Target1', False] 00:32:11.633 Executing command: ['/iscsi/target_nodes delete_all', 'Target0', False] 00:32:11.633 Executing command: ['/iscsi/initiator_groups delete_initiator 2 ANW 10.0.2.16/32', 'ANW', False] 00:32:11.633 Executing command: ['/iscsi/initiator_groups delete 3', 'ANZ', False] 00:32:11.633 Executing command: ['/iscsi/initiator_groups delete_all', 'ANY', False] 00:32:11.633 Executing command: ['/iscsi/portal_groups delete 1', '127.0.0.1:3261', False] 00:32:11.633 Executing command: ['/iscsi/portal_groups delete_all', '127.0.0.1:3262', False] 00:32:11.633 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:11.633 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:11.633 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:11.633 Executing command: ['/bdevs/malloc delete Malloc0', 'Malloc0', False] 00:32:11.633 05:21:26 spdkcli_iscsi -- spdkcli/iscsi.sh@73 -- # timing_exit spdkcli_clear_iscsi_config 00:32:11.633 05:21:26 spdkcli_iscsi -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:11.633 05:21:26 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:32:11.633 05:21:26 spdkcli_iscsi -- spdkcli/iscsi.sh@75 -- # killprocess 125537 00:32:11.633 05:21:26 spdkcli_iscsi -- common/autotest_common.sh@948 -- # '[' -z 125537 ']' 00:32:11.633 05:21:26 spdkcli_iscsi -- common/autotest_common.sh@952 -- # kill -0 125537 00:32:11.633 05:21:26 spdkcli_iscsi -- common/autotest_common.sh@953 -- # uname 00:32:11.633 05:21:26 spdkcli_iscsi -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:11.633 05:21:26 spdkcli_iscsi -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125537 00:32:11.633 05:21:26 spdkcli_iscsi -- common/autotest_common.sh@954 -- # process_name=reactor_0 
00:32:11.633 05:21:26 spdkcli_iscsi -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:11.633 killing process with pid 125537 00:32:11.633 05:21:26 spdkcli_iscsi -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125537' 00:32:11.633 05:21:26 spdkcli_iscsi -- common/autotest_common.sh@967 -- # kill 125537 00:32:11.633 05:21:26 spdkcli_iscsi -- common/autotest_common.sh@972 -- # wait 125537 00:32:14.166 05:21:28 spdkcli_iscsi -- spdkcli/iscsi.sh@1 -- # cleanup 00:32:14.166 05:21:28 spdkcli_iscsi -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:14.166 05:21:28 spdkcli_iscsi -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:32:14.166 05:21:28 spdkcli_iscsi -- spdkcli/common.sh@16 -- # '[' -n 125537 ']' 00:32:14.166 05:21:28 spdkcli_iscsi -- spdkcli/common.sh@17 -- # killprocess 125537 00:32:14.166 05:21:28 spdkcli_iscsi -- common/autotest_common.sh@948 -- # '[' -z 125537 ']' 00:32:14.166 05:21:28 spdkcli_iscsi -- common/autotest_common.sh@952 -- # kill -0 125537 00:32:14.166 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (125537) - No such process 00:32:14.166 Process with pid 125537 is not found 00:32:14.167 05:21:28 spdkcli_iscsi -- common/autotest_common.sh@975 -- # echo 'Process with pid 125537 is not found' 00:32:14.167 05:21:28 spdkcli_iscsi -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:14.167 05:21:28 spdkcli_iscsi -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_iscsi.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:14.167 00:32:14.167 real 0m19.469s 00:32:14.167 user 0m40.234s 00:32:14.167 sys 0m1.319s 00:32:14.167 05:21:28 spdkcli_iscsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:14.167 05:21:28 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:32:14.167 ************************************ 00:32:14.167 END TEST spdkcli_iscsi 00:32:14.167 ************************************ 
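The `killprocess` trace above (`kill -0 125537` followed by "No such process") relies on signal 0: it delivers nothing and only reports whether the PID exists. A minimal sketch of that liveness check (the helper name is illustrative):

```shell
# Return 0 if the PID exists, non-zero otherwise; kill -0 sends no
# signal, it only performs the existence/permission check.
process_alive() {
    kill -0 "$1" 2>/dev/null
}

process_alive "$$" && echo "shell pid exists"
process_alive 999999999 || echo "bogus pid not found"
```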
00:32:14.167 05:21:28 -- spdk/autotest.sh@267 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:32:14.167 05:21:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:14.167 05:21:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:14.167 05:21:28 -- common/autotest_common.sh@10 -- # set +x 00:32:14.167 ************************************ 00:32:14.167 START TEST spdkcli_raid 00:32:14.167 ************************************ 00:32:14.167 05:21:28 spdkcli_raid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:32:14.167 * Looking for test storage... 00:32:14.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:32:14.167 05:21:28 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:32:14.167 05:21:28 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:32:14.167 05:21:28 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:32:14.167 05:21:28 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:32:14.167 05:21:28 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:32:14.167 05:21:28 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:32:14.167 05:21:28 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:32:14.167 05:21:28 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:32:14.167 05:21:28 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:32:14.167 05:21:28 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:32:14.167 05:21:28 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:32:14.167 05:21:28 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # 
TARGET_BRIDGE=tgt_br 00:32:14.167 05:21:28 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:32:14.167 05:21:28 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:32:14.167 05:21:28 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:32:14.167 05:21:28 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:32:14.167 05:21:28 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:32:14.167 05:21:28 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:32:14.167 05:21:28 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:32:14.167 05:21:28 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:32:14.167 05:21:28 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:32:14.167 05:21:28 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:32:14.167 05:21:28 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:32:14.167 05:21:28 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:32:14.167 05:21:28 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:32:14.167 05:21:28 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:32:14.167 05:21:28 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:32:14.167 05:21:28 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:32:14.167 05:21:28 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:32:14.167 05:21:28 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:32:14.167 05:21:28 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:32:14.167 05:21:28 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:32:14.167 05:21:28 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:32:14.167 05:21:28 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:14.167 05:21:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:32:14.167 05:21:28 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:32:14.167 05:21:28 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=125868 00:32:14.167 05:21:28 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 125868 00:32:14.167 05:21:28 spdkcli_raid -- common/autotest_common.sh@829 -- # '[' -z 125868 ']' 00:32:14.167 05:21:28 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:32:14.167 05:21:28 spdkcli_raid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:14.167 05:21:28 spdkcli_raid -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:14.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:14.167 05:21:28 spdkcli_raid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:14.167 05:21:28 spdkcli_raid -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:14.167 05:21:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:32:14.426 [2024-07-24 05:21:28.896264] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.03.0 initialization... 
00:32:14.426 [2024-07-24 05:21:28.896433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125868 ] 00:32:14.685 [2024-07-24 05:21:29.077409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:14.685 [2024-07-24 05:21:29.297435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:14.685 [2024-07-24 05:21:29.297458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:14.944 [2024-07-24 05:21:29.534246] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:32:15.881 05:21:30 spdkcli_raid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:15.881 05:21:30 spdkcli_raid -- common/autotest_common.sh@862 -- # return 0 00:32:15.881 05:21:30 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:32:15.881 05:21:30 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:15.881 05:21:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:32:15.881 05:21:30 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:32:15.881 05:21:30 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:15.881 05:21:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:32:15.881 05:21:30 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:15.881 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:15.881 ' 00:32:17.259 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:32:17.259 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:32:17.259 05:21:31 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc
00:32:17.259 05:21:31 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:17.259 05:21:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:32:17.259 05:21:31 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:32:17.259 05:21:31 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:17.259 05:21:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:32:17.518 05:21:31 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:32:17.518 ' 00:32:18.455 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:32:18.455 05:21:32 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:32:18.455 05:21:32 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:18.455 05:21:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:32:18.455 05:21:33 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:32:18.455 05:21:33 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:18.455 05:21:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:32:18.455 05:21:33 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:32:18.455 05:21:33 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:32:19.022 05:21:33 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:32:19.022 05:21:33 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:32:19.022 05:21:33 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:32:19.022 05:21:33 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 
00:32:19.022 05:21:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:32:19.022 05:21:33 spdkcli_raid -- spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:32:19.022 05:21:33 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:19.022 05:21:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:32:19.022 05:21:33 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:32:19.022 ' 00:32:20.400 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:32:20.400 05:21:34 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:32:20.400 05:21:34 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:20.400 05:21:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:32:20.400 05:21:34 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:32:20.400 05:21:34 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:20.400 05:21:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:32:20.400 05:21:34 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:32:20.400 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:32:20.400 ' 00:32:21.827 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:32:21.827 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:32:21.827 05:21:36 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:32:21.827 05:21:36 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:21.827 05:21:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:32:21.827 05:21:36 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 125868 00:32:21.828 05:21:36 spdkcli_raid -- common/autotest_common.sh@948 -- # '[' -z 125868 ']' 00:32:21.828 05:21:36 
spdkcli_raid -- common/autotest_common.sh@952 -- # kill -0 125868 00:32:21.828 05:21:36 spdkcli_raid -- common/autotest_common.sh@953 -- # uname 00:32:21.828 05:21:36 spdkcli_raid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:21.828 05:21:36 spdkcli_raid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125868 00:32:21.828 05:21:36 spdkcli_raid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:21.828 05:21:36 spdkcli_raid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:21.828 killing process with pid 125868 00:32:21.828 05:21:36 spdkcli_raid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125868' 00:32:21.828 05:21:36 spdkcli_raid -- common/autotest_common.sh@967 -- # kill 125868 00:32:21.828 05:21:36 spdkcli_raid -- common/autotest_common.sh@972 -- # wait 125868 00:32:24.363 05:21:38 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:32:24.363 05:21:38 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 125868 ']' 00:32:24.363 05:21:38 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 125868 00:32:24.363 05:21:38 spdkcli_raid -- common/autotest_common.sh@948 -- # '[' -z 125868 ']' 00:32:24.363 05:21:38 spdkcli_raid -- common/autotest_common.sh@952 -- # kill -0 125868 00:32:24.363 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (125868) - No such process 00:32:24.363 Process with pid 125868 is not found 00:32:24.363 05:21:38 spdkcli_raid -- common/autotest_common.sh@975 -- # echo 'Process with pid 125868 is not found' 00:32:24.363 05:21:38 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:32:24.363 05:21:38 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:24.363 05:21:38 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:24.363 05:21:38 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test 
/home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:24.363 00:32:24.363 real 0m10.146s 00:32:24.363 user 0m20.571s 00:32:24.363 sys 0m1.131s 00:32:24.363 05:21:38 spdkcli_raid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:24.363 ************************************ 00:32:24.363 END TEST spdkcli_raid 00:32:24.363 ************************************ 00:32:24.363 05:21:38 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:32:24.363 05:21:38 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:32:24.363 05:21:38 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:32:24.363 05:21:38 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:32:24.363 05:21:38 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:32:24.363 05:21:38 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:32:24.363 05:21:38 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:32:24.363 05:21:38 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:32:24.363 05:21:38 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:32:24.363 05:21:38 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:32:24.363 05:21:38 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:32:24.363 05:21:38 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:32:24.363 05:21:38 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:32:24.363 05:21:38 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:32:24.363 05:21:38 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:32:24.363 05:21:38 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:24.363 05:21:38 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:24.363 05:21:38 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:32:24.363 05:21:38 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:32:24.363 05:21:38 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:32:24.363 05:21:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:24.363 05:21:38 -- common/autotest_common.sh@10 -- # set +x 00:32:24.363 05:21:38 -- spdk/autotest.sh@383 -- # autotest_cleanup 
00:32:24.363 05:21:38 -- common/autotest_common.sh@1390 -- # local autotest_es=0 00:32:24.363 05:21:38 -- common/autotest_common.sh@1391 -- # xtrace_disable 00:32:24.363 05:21:38 -- common/autotest_common.sh@10 -- # set +x 00:32:26.268 INFO: APP EXITING 00:32:26.268 INFO: killing all VMs 00:32:26.268 INFO: killing vhost app 00:32:26.268 INFO: EXIT DONE 00:32:26.527 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:26.527 Waiting for block devices as requested 00:32:26.527 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:26.786 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:27.355 0000:00:10.0 (1b36 0010): Active devices: data@nvme1n1, so not binding PCI dev 00:32:27.614 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:27.614 Cleaning 00:32:27.614 Removing: /var/run/dpdk/spdk0/config 00:32:27.614 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:27.614 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:27.614 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:27.614 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:27.614 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:27.614 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:27.614 Removing: /var/run/dpdk/spdk1/config 00:32:27.614 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:27.614 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:27.614 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:27.614 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:27.614 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:27.614 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:27.614 Removing: /dev/shm/iscsi_trace.pid82001 00:32:27.614 Removing: /dev/shm/spdk_tgt_trace.pid58948 00:32:27.614 Removing: /var/run/dpdk/spdk0 00:32:27.614 Removing: /var/run/dpdk/spdk1 00:32:27.614 Removing: 
/var/run/dpdk/spdk_pid125537 00:32:27.614 Removing: /var/run/dpdk/spdk_pid125868 00:32:27.614 Removing: /var/run/dpdk/spdk_pid58710 00:32:27.615 Removing: /var/run/dpdk/spdk_pid58948 00:32:27.615 Removing: /var/run/dpdk/spdk_pid59169 00:32:27.615 Removing: /var/run/dpdk/spdk_pid59273 00:32:27.615 Removing: /var/run/dpdk/spdk_pid59329 00:32:27.615 Removing: /var/run/dpdk/spdk_pid59468 00:32:27.615 Removing: /var/run/dpdk/spdk_pid59486 00:32:27.615 Removing: /var/run/dpdk/spdk_pid59640 00:32:27.615 Removing: /var/run/dpdk/spdk_pid59839 00:32:27.615 Removing: /var/run/dpdk/spdk_pid59998 00:32:27.615 Removing: /var/run/dpdk/spdk_pid60096 00:32:27.615 Removing: /var/run/dpdk/spdk_pid60201 00:32:27.615 Removing: /var/run/dpdk/spdk_pid60315 00:32:27.615 Removing: /var/run/dpdk/spdk_pid60415 00:32:27.615 Removing: /var/run/dpdk/spdk_pid60460 00:32:27.615 Removing: /var/run/dpdk/spdk_pid60502 00:32:27.615 Removing: /var/run/dpdk/spdk_pid60570 00:32:27.615 Removing: /var/run/dpdk/spdk_pid60676 00:32:27.615 Removing: /var/run/dpdk/spdk_pid61115 00:32:27.615 Removing: /var/run/dpdk/spdk_pid61196 00:32:27.615 Removing: /var/run/dpdk/spdk_pid61272 00:32:27.615 Removing: /var/run/dpdk/spdk_pid61293 00:32:27.615 Removing: /var/run/dpdk/spdk_pid61451 00:32:27.874 Removing: /var/run/dpdk/spdk_pid61478 00:32:27.874 Removing: /var/run/dpdk/spdk_pid61631 00:32:27.874 Removing: /var/run/dpdk/spdk_pid61653 00:32:27.874 Removing: /var/run/dpdk/spdk_pid61717 00:32:27.874 Removing: /var/run/dpdk/spdk_pid61741 00:32:27.874 Removing: /var/run/dpdk/spdk_pid61810 00:32:27.874 Removing: /var/run/dpdk/spdk_pid61828 00:32:27.874 Removing: /var/run/dpdk/spdk_pid62021 00:32:27.874 Removing: /var/run/dpdk/spdk_pid62063 00:32:27.874 Removing: /var/run/dpdk/spdk_pid62144 00:32:27.874 Removing: /var/run/dpdk/spdk_pid62225 00:32:27.874 Removing: /var/run/dpdk/spdk_pid62256 00:32:27.874 Removing: /var/run/dpdk/spdk_pid62334 00:32:27.874 Removing: /var/run/dpdk/spdk_pid62386 00:32:27.874 Removing: 
/var/run/dpdk/spdk_pid62433 00:32:27.874 Removing: /var/run/dpdk/spdk_pid62479 00:32:27.874 Removing: /var/run/dpdk/spdk_pid62526 00:32:27.874 Removing: /var/run/dpdk/spdk_pid62578 00:32:27.874 Removing: /var/run/dpdk/spdk_pid62624 00:32:27.874 Removing: /var/run/dpdk/spdk_pid62671 00:32:27.874 Removing: /var/run/dpdk/spdk_pid62723 00:32:27.874 Removing: /var/run/dpdk/spdk_pid62764 00:32:27.874 Removing: /var/run/dpdk/spdk_pid62816 00:32:27.874 Removing: /var/run/dpdk/spdk_pid62863 00:32:27.874 Removing: /var/run/dpdk/spdk_pid62909 00:32:27.874 Removing: /var/run/dpdk/spdk_pid62956 00:32:27.874 Removing: /var/run/dpdk/spdk_pid63003 00:32:27.874 Removing: /var/run/dpdk/spdk_pid63053 00:32:27.874 Removing: /var/run/dpdk/spdk_pid63101 00:32:27.874 Removing: /var/run/dpdk/spdk_pid63156 00:32:27.874 Removing: /var/run/dpdk/spdk_pid63200 00:32:27.874 Removing: /var/run/dpdk/spdk_pid63252 00:32:27.875 Removing: /var/run/dpdk/spdk_pid63301 00:32:27.875 Removing: /var/run/dpdk/spdk_pid63387 00:32:27.875 Removing: /var/run/dpdk/spdk_pid63504 00:32:27.875 Removing: /var/run/dpdk/spdk_pid63842 00:32:27.875 Removing: /var/run/dpdk/spdk_pid63855 00:32:27.875 Removing: /var/run/dpdk/spdk_pid63909 00:32:27.875 Removing: /var/run/dpdk/spdk_pid63940 00:32:27.875 Removing: /var/run/dpdk/spdk_pid63973 00:32:27.875 Removing: /var/run/dpdk/spdk_pid64014 00:32:27.875 Removing: /var/run/dpdk/spdk_pid64041 00:32:27.875 Removing: /var/run/dpdk/spdk_pid64074 00:32:27.875 Removing: /var/run/dpdk/spdk_pid64110 00:32:27.875 Removing: /var/run/dpdk/spdk_pid64141 00:32:27.875 Removing: /var/run/dpdk/spdk_pid64173 00:32:27.875 Removing: /var/run/dpdk/spdk_pid64211 00:32:27.875 Removing: /var/run/dpdk/spdk_pid64242 00:32:27.875 Removing: /var/run/dpdk/spdk_pid64275 00:32:27.875 Removing: /var/run/dpdk/spdk_pid64306 00:32:27.875 Removing: /var/run/dpdk/spdk_pid64338 00:32:27.875 Removing: /var/run/dpdk/spdk_pid64371 00:32:27.875 Removing: /var/run/dpdk/spdk_pid64408 00:32:27.875 Removing: 
/var/run/dpdk/spdk_pid64439 00:32:27.875 Removing: /var/run/dpdk/spdk_pid64473 00:32:27.875 Removing: /var/run/dpdk/spdk_pid64522 00:32:27.875 Removing: /var/run/dpdk/spdk_pid64553 00:32:27.875 Removing: /var/run/dpdk/spdk_pid64600 00:32:27.875 Removing: /var/run/dpdk/spdk_pid64676 00:32:27.875 Removing: /var/run/dpdk/spdk_pid64728 00:32:27.875 Removing: /var/run/dpdk/spdk_pid64755 00:32:27.875 Removing: /var/run/dpdk/spdk_pid64801 00:32:27.875 Removing: /var/run/dpdk/spdk_pid64828 00:32:27.875 Removing: /var/run/dpdk/spdk_pid64853 00:32:27.875 Removing: /var/run/dpdk/spdk_pid64913 00:32:27.875 Removing: /var/run/dpdk/spdk_pid64944 00:32:27.875 Removing: /var/run/dpdk/spdk_pid64991 00:32:27.875 Removing: /var/run/dpdk/spdk_pid65018 00:32:27.875 Removing: /var/run/dpdk/spdk_pid65045 00:32:27.875 Removing: /var/run/dpdk/spdk_pid65072 00:32:27.875 Removing: /var/run/dpdk/spdk_pid65105 00:32:28.134 Removing: /var/run/dpdk/spdk_pid65132 00:32:28.134 Removing: /var/run/dpdk/spdk_pid65160 00:32:28.134 Removing: /var/run/dpdk/spdk_pid65187 00:32:28.134 Removing: /var/run/dpdk/spdk_pid65233 00:32:28.134 Removing: /var/run/dpdk/spdk_pid65277 00:32:28.134 Removing: /var/run/dpdk/spdk_pid65304 00:32:28.134 Removing: /var/run/dpdk/spdk_pid65350 00:32:28.134 Removing: /var/run/dpdk/spdk_pid65376 00:32:28.134 Removing: /var/run/dpdk/spdk_pid65397 00:32:28.134 Removing: /var/run/dpdk/spdk_pid65455 00:32:28.134 Removing: /var/run/dpdk/spdk_pid65487 00:32:28.134 Removing: /var/run/dpdk/spdk_pid65534 00:32:28.134 Removing: /var/run/dpdk/spdk_pid65559 00:32:28.134 Removing: /var/run/dpdk/spdk_pid65584 00:32:28.134 Removing: /var/run/dpdk/spdk_pid65609 00:32:28.134 Removing: /var/run/dpdk/spdk_pid65634 00:32:28.134 Removing: /var/run/dpdk/spdk_pid65659 00:32:28.134 Removing: /var/run/dpdk/spdk_pid65684 00:32:28.134 Removing: /var/run/dpdk/spdk_pid65709 00:32:28.134 Removing: /var/run/dpdk/spdk_pid65795 00:32:28.134 Removing: /var/run/dpdk/spdk_pid65899 00:32:28.134 Removing: 
/var/run/dpdk/spdk_pid66049 00:32:28.134 Removing: /var/run/dpdk/spdk_pid66104 00:32:28.134 Removing: /var/run/dpdk/spdk_pid66168 00:32:28.134 Removing: /var/run/dpdk/spdk_pid66200 00:32:28.134 Removing: /var/run/dpdk/spdk_pid66234 00:32:28.134 Removing: /var/run/dpdk/spdk_pid66266 00:32:28.134 Removing: /var/run/dpdk/spdk_pid66316 00:32:28.134 Removing: /var/run/dpdk/spdk_pid66349 00:32:28.134 Removing: /var/run/dpdk/spdk_pid66437 00:32:28.134 Removing: /var/run/dpdk/spdk_pid66487 00:32:28.134 Removing: /var/run/dpdk/spdk_pid66565 00:32:28.134 Removing: /var/run/dpdk/spdk_pid66699 00:32:28.134 Removing: /var/run/dpdk/spdk_pid66781 00:32:28.134 Removing: /var/run/dpdk/spdk_pid66840 00:32:28.134 Removing: /var/run/dpdk/spdk_pid66973 00:32:28.134 Removing: /var/run/dpdk/spdk_pid67033 00:32:28.134 Removing: /var/run/dpdk/spdk_pid67089 00:32:28.134 Removing: /var/run/dpdk/spdk_pid67336 00:32:28.134 Removing: /var/run/dpdk/spdk_pid67460 00:32:28.134 Removing: /var/run/dpdk/spdk_pid67505 00:32:28.134 Removing: /var/run/dpdk/spdk_pid67772 00:32:28.134 Removing: /var/run/dpdk/spdk_pid67798 00:32:28.134 Removing: /var/run/dpdk/spdk_pid67829 00:32:28.134 Removing: /var/run/dpdk/spdk_pid67879 00:32:28.134 Removing: /var/run/dpdk/spdk_pid67884 00:32:28.134 Removing: /var/run/dpdk/spdk_pid67913 00:32:28.134 Removing: /var/run/dpdk/spdk_pid67941 00:32:28.134 Removing: /var/run/dpdk/spdk_pid67957 00:32:28.134 Removing: /var/run/dpdk/spdk_pid68008 00:32:28.134 Removing: /var/run/dpdk/spdk_pid68034 00:32:28.134 Removing: /var/run/dpdk/spdk_pid68092 00:32:28.134 Removing: /var/run/dpdk/spdk_pid68185 00:32:28.134 Removing: /var/run/dpdk/spdk_pid68964 00:32:28.134 Removing: /var/run/dpdk/spdk_pid70629 00:32:28.134 Removing: /var/run/dpdk/spdk_pid70929 00:32:28.134 Removing: /var/run/dpdk/spdk_pid71252 00:32:28.134 Removing: /var/run/dpdk/spdk_pid71520 00:32:28.134 Removing: /var/run/dpdk/spdk_pid72104 00:32:28.135 Removing: /var/run/dpdk/spdk_pid76779 00:32:28.135 Removing: 
/var/run/dpdk/spdk_pid80850 00:32:28.135 Removing: /var/run/dpdk/spdk_pid81626 00:32:28.135 Removing: /var/run/dpdk/spdk_pid81666 00:32:28.135 Removing: /var/run/dpdk/spdk_pid82001 00:32:28.135 Removing: /var/run/dpdk/spdk_pid83365 00:32:28.135 Removing: /var/run/dpdk/spdk_pid83772 00:32:28.135 Removing: /var/run/dpdk/spdk_pid83829 00:32:28.135 Removing: /var/run/dpdk/spdk_pid84237 00:32:28.135 Removing: /var/run/dpdk/spdk_pid86636 00:32:28.135 Clean 00:32:28.394 05:21:42 -- common/autotest_common.sh@1449 -- # return 0 00:32:28.394 05:21:42 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:32:28.394 05:21:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:28.394 05:21:42 -- common/autotest_common.sh@10 -- # set +x 00:32:28.394 05:21:42 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:32:28.394 05:21:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:28.394 05:21:42 -- common/autotest_common.sh@10 -- # set +x 00:32:28.394 05:21:42 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:28.394 05:21:42 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:32:28.394 05:21:42 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:32:28.394 05:21:42 -- spdk/autotest.sh@391 -- # hash lcov 00:32:28.394 05:21:42 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:28.394 05:21:42 -- spdk/autotest.sh@393 -- # hostname 00:32:28.394 05:21:42 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:32:28.653 geninfo: WARNING: invalid characters removed from testname! 
00:32:50.598 05:22:04 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:52.496 05:22:06 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:54.407 05:22:08 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:56.939 05:22:11 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:58.842 05:22:13 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:00.745 05:22:15 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:03.279 05:22:17 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:33:03.279 05:22:17 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:33:03.279 05:22:17 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:33:03.279 05:22:17 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:03.279 05:22:17 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:03.279 05:22:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:03.279 05:22:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:03.279 05:22:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:03.279 05:22:17 -- paths/export.sh@5 -- $ export PATH
00:33:03.279 05:22:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:03.279 05:22:17 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:33:03.279 05:22:17 -- common/autobuild_common.sh@447 -- $ date +%s
00:33:03.279 05:22:17 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721798537.XXXXXX
00:33:03.279 05:22:17 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721798537.ZTfPRC
00:33:03.279 05:22:17 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:33:03.279 05:22:17 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:33:03.279 05:22:17 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:33:03.279 05:22:17 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:33:03.279 05:22:17 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:33:03.279 05:22:17 -- common/autobuild_common.sh@463 -- $ get_config_params
00:33:03.279 05:22:17 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:33:03.279 05:22:17 -- common/autotest_common.sh@10 -- $ set +x
00:33:03.279 05:22:17 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-uring'
00:33:03.279 05:22:17 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:33:03.279 05:22:17 -- pm/common@17 -- $ local monitor
00:33:03.279 05:22:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:03.279 05:22:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:03.279 05:22:17 -- pm/common@25 -- $ sleep 1
00:33:03.279 05:22:17 -- pm/common@21 -- $ date +%s
00:33:03.279 05:22:17 -- pm/common@21 -- $ date +%s
00:33:03.279 05:22:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721798537
00:33:03.279 05:22:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721798537
00:33:03.279 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721798537_collect-vmstat.pm.log
00:33:03.279 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721798537_collect-cpu-load.pm.log
00:33:03.846 05:22:18 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:33:03.846 05:22:18 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:33:03.846 05:22:18 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:33:03.846 05:22:18 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:33:03.846 05:22:18 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:33:03.846 05:22:18 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:33:03.846 05:22:18 -- spdk/autopackage.sh@19 -- $ timing_finish
00:33:03.846 05:22:18 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:03.846 05:22:18 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:33:03.846 05:22:18 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:33:04.106 05:22:18 -- spdk/autopackage.sh@20 -- $ exit 0
00:33:04.106 05:22:18 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:33:04.106 05:22:18 -- pm/common@29 -- $ signal_monitor_resources TERM
00:33:04.106 05:22:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:33:04.106 05:22:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:04.106 05:22:18 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:33:04.106 05:22:18 -- pm/common@44 -- $ pid=127631
00:33:04.106 05:22:18 -- pm/common@50 -- $ kill -TERM 127631
00:33:04.106 05:22:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:04.106 05:22:18 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:33:04.106 05:22:18 -- pm/common@44 -- $ pid=127632
00:33:04.106 05:22:18 -- pm/common@50 -- $ kill -TERM 127632
00:33:04.106 + [[ -n 5157 ]]
00:33:04.106 + sudo kill 5157
00:33:04.116 [Pipeline] }
00:33:04.135 [Pipeline] // timeout
00:33:04.141 [Pipeline] }
00:33:04.159 [Pipeline] // stage
00:33:04.164 [Pipeline] }
00:33:04.181 [Pipeline] // catchError
00:33:04.191 [Pipeline] stage
00:33:04.194 [Pipeline] { (Stop VM)
00:33:04.208 [Pipeline] sh
00:33:04.517 + vagrant halt
00:33:07.806 ==> default: Halting domain...
00:33:14.381 [Pipeline] sh
00:33:14.661 + vagrant destroy -f
00:33:17.945 ==> default: Removing domain...
00:33:17.996 [Pipeline] sh
00:33:18.311 + mv output /var/jenkins/workspace/iscsi-uring-vg-autotest/output
00:33:18.320 [Pipeline] }
00:33:18.337 [Pipeline] // stage
00:33:18.343 [Pipeline] }
00:33:18.360 [Pipeline] // dir
00:33:18.367 [Pipeline] }
00:33:18.385 [Pipeline] // wrap
00:33:18.391 [Pipeline] }
00:33:18.407 [Pipeline] // catchError
00:33:18.416 [Pipeline] stage
00:33:18.419 [Pipeline] { (Epilogue)
00:33:18.435 [Pipeline] sh
00:33:18.716 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:23.992 [Pipeline] catchError
00:33:23.994 [Pipeline] {
00:33:24.009 [Pipeline] sh
00:33:24.292 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:24.550 Artifacts sizes are good
00:33:24.558 [Pipeline] }
00:33:24.573 [Pipeline] // catchError
00:33:24.583 [Pipeline] archiveArtifacts
00:33:24.590 Archiving artifacts
00:33:25.707 [Pipeline] cleanWs
00:33:25.717 [WS-CLEANUP] Deleting project workspace...
00:33:25.717 [WS-CLEANUP] Deferred wipeout is used...
00:33:25.723 [WS-CLEANUP] done
00:33:25.725 [Pipeline] }
00:33:25.742 [Pipeline] // stage
00:33:25.747 [Pipeline] }
00:33:25.763 [Pipeline] // node
00:33:25.769 [Pipeline] End of Pipeline
00:33:25.810 Finished: SUCCESS